OSGeo Planet

Nyall Dawson: New map coloring algorithms in QGIS 3.0

OSGeo Planet - Wed, 2017-02-22 06:17

It’s been a long time since I last blogged here. Let’s just blame that on the amount of changes going into QGIS 3.0 and move on…

One new feature which landed in QGIS 3.0 today is a processing algorithm for automatic coloring of a map in such a way that adjoining polygons are all assigned different color indexes. Astute readers may be aware that this was possible in earlier versions of QGIS through the use of either the (QGIS 1.x only!) Topocolor plugin, or the Coloring a map plugin (2.x).

What’s interesting about this new processing algorithm is that it introduces several refinements for cartographically optimising the coloring. The earlier plugins both operated by pure “graph” coloring techniques. What this means is that first a graph consisting of each set of adjoining features is generated. Then, based purely on this abstract graph, the coloring algorithms are applied to optimise the solution so that connected graph nodes are assigned different colors, whilst keeping the total number of colors required minimised.
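As a rough illustration, the purely graph-based approach the earlier plugins used can be sketched in a few lines of Python (the adjacency graph here is made up; the real plugins derive it from polygon intersection tests):

```python
# Greedy graph coloring: give each node the lowest color index
# not already taken by one of its adjoining neighbours.

def greedy_color(adjacency):
    """adjacency: dict mapping feature id -> set of adjoining ids."""
    colors = {}
    for node in adjacency:
        taken = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in taken:
            color += 1
        colors[node] = color
    return colors

# Four polygons in a strip: A-B, B-C and C-D adjoin each other.
graph = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
print(greedy_color(graph))  # {'A': 0, 'B': 1, 'C': 0, 'D': 1}
```

A real implementation would also reorder the nodes (for example by degree) to keep the color count low, but the sketch shows the key property of this approach: once the graph is built, geometry plays no further part.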

The new QGIS algorithm works in a different way. Whilst the first step is still calculating the graph of adjoining features (now super-fast due to use of spatial indexes and prepared geometry intersection tests!), the colors for the graph are assigned while considering the spatial arrangement of all features. It’s gone from a purely abstract mathematical solution to a context-sensitive cartographic solution.

The “Topological coloring” processing algorithm

Let’s explore the differences. First up, the algorithm has an option for the “minimum distance between features”. It’s often the case that features aren’t really touching, but are instead just very close to each other. Even though they aren’t touching, we still don’t want these features to be assigned the same color. This option lets you control how close two features can be before they may be assigned the same color.

The biggest change comes in the “balancing” techniques available in the new algorithm. By default, the algorithm now tries to assign colors in such a way that the total number of features assigned each color is equalised. This avoids having a color which is only assigned to a couple of features in a large dataset, resulting in an odd looking map coloration.

Balancing color assignment by count – notice how each class has an (almost!) equal count

Another available balancing technique is to balance the color assignment by total area. This technique assigns colors so that the total area of the features assigned to each color is balanced. This mode can be useful to help avoid large features resulting in one of the colors appearing more dominant on a colored map.

Balancing assignment by area – note how only one large feature is assigned the red color

The final technique, and my personal preference, is to balance colors by distance between colors. This mode will assign colors in order to maximize the distance between features of the same color. Maximising the distance helps to create a more uniform distribution of colors across a map, and avoids certain colors clustering in a particular area of the map. It’s my preference as it creates a really nice balanced map – at a glance the colors look “randomly” assigned with no discernible pattern to the arrangement.

Balancing colors by distance
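To make the distance-balancing idea concrete, here is a hypothetical sketch (not QGIS’s actual code): among the colors not used by a neighbour, pick the one whose nearest already-assigned feature is farthest away. Features are reduced to made-up representative points:

```python
import math

def color_by_distance(points, adjacency, n_colors):
    """points: feature id -> (x, y); adjacency: feature id -> set of ids."""
    assigned = {}
    for fid, (x, y) in points.items():
        forbidden = {assigned[n] for n in adjacency[fid] if n in assigned}
        best, best_dist = None, -1.0
        for color in range(n_colors):
            if color in forbidden:
                continue
            # distance to the nearest feature already holding this color
            dists = [math.hypot(x - px, y - py)
                     for other, (px, py) in points.items()
                     if assigned.get(other) == color]
            d = min(dists) if dists else float("inf")
            if d > best_dist:
                best, best_dist = color, d
        assigned[fid] = best
    return assigned

# Four features in a row, each adjoining the next.
points = {1: (0, 0), 2: (1, 0), 3: (2, 0), 4: (3, 0)}
adjacency = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(color_by_distance(points, adjacency, 2))  # {1: 0, 2: 1, 3: 0, 4: 1}
```

Unlike the pure graph approach, the choice here depends on where the features actually sit, which is exactly what pushes same-colored features apart on the map.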

As these examples show, considering the geographic arrangement of features while coloring allows us to optimise the assigned colors for cartographic output.

The other nice thing about having this feature implemented as a processing algorithm is that unlike standalone plugins, processing algorithms can be incorporated as just one step of a larger model (and also reused by other plugins!).

QGIS 3.0 has tons of great new features, speed boosts and stability bumps. This is just a tiny taste of the handy new features which will be available when 3.0 is released!

Categories: OSGeo Planet

GeoSolutions: First release of 2017 for MapStore 2 plus WFS Support

OSGeo Planet - Tue, 2017-02-21 16:37


Dear Readers,

we are pleased to announce a new release of MapStore 2, our flagship open source WebGIS product, which we have called 2017.01.00. The full list of changes for this release can be found here, but let us concentrate on the most interesting recent additions.

WFS Query on vector layers

We have introduced the option to query vector layers via the OGC WFS protocol. This functionality can be accessed from the Layer TOC (Table of Contents) once a layer that advertises the OGC WFS protocol has been added from the OGC CSW catalog widget.

Adding a layer from the Catalog

If a layer in the TOC supports querying via the OGC WFS protocol, a magnifier icon will appear under the “Display legend and tools” section, as shown below.

Magnifier Icon for WFS Query

A click on the icon will open a form, where you can choose to filter by attribute, spatially, or both. Filters can be set up to match any, all or none of the specified conditions, and complex queries with subfilters can be created.
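Under the hood, a query builder like this ends up issuing OGC WFS GetFeature requests. As a rough, hypothetical sketch (the server URL, layer name and filter below are made up, and cql_filter is a GeoServer vendor parameter rather than core WFS):

```python
from urllib.parse import urlencode

def getfeature_url(base, typename, cql_filter, max_features=10):
    """Build a WFS 1.1.0 GetFeature request URL in key-value-pair form."""
    params = {
        "service": "WFS",
        "version": "1.1.0",
        "request": "GetFeature",
        "typeName": typename,
        "maxFeatures": max_features,
        "outputFormat": "application/json",
        "cql_filter": cql_filter,
    }
    return base + "?" + urlencode(params)

url = getfeature_url("https://example.com/geoserver/wfs",
                     "topp:states", "PERSONS > 1000000")
print(url)
```

MapStore itself builds the equivalent request from the form in the UI; the sketch just shows the shape of what goes over the wire.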

WFS Query Builder

If the query is successful, the first page of the results will be downloaded and shown in a grid widget; this widget supports the usual features such as zoom to row, column selection and export as CSV (more formats will be added in the future).

Results of a WFS query

Exporting WFS results as CSV

Standalone binary package

We have created a downloadable standalone package for you to test MapStore 2 on your own machine. It is Windows and Linux compatible and contains everything you need to run MapStore 2, with no installation required. All you have to do is:

  • unzip it to a location on your computer
  • run the .bat script if on Windows, or the corresponding shell script if on Linux

Please refer to the online documentation for further information.

Developer notes

NPM 2 support has been dropped; MapStore2 now supports:
  • NodeJS >= 4.6.1
  • NPM >= 3.x

What we are working on

We have a number of functionalities in our plans (editing, advanced templating, OAUTH 2.0, etc...); for the next release we are focusing on the following ones:

  • Balloon Tutorial, for a more modern help system. More information can be found here
  • Improved Developers Documentation
  • HTTPS support, so that geolocation works on Chrome as well
  • Side effect management with redux-observable (for developers)
  • Advanced Theming, which should allow us to easily change themes using a high-level language like Less

If you are interested in learning how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,
Categories: OSGeo Planet

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 13): My first* pull request

OSGeo Planet - Tue, 2017-02-21 12:05
Previously, I switched our testing stack for mapguide-react-layout over to Jest, which had some positive flow-on effects, like being able to finally upgrade to Webpack 2 and being able to try out the new OpenLayers npm package, resulting in a nice reduction in production bundle size due to only pulling the bits of OpenLayers that we are actually using. Jest also has code coverage built in, and by piping its coverage output to node-coveralls, TravisCI will automatically upload said coverage reports to coveralls.io resulting in yet another shiny badge to show on our project page.

These badges are becoming like Pokemon: I just want to catch 'em all.

So the next badge for me to collect was greenkeeper. Greenkeeper is a free service that monitors your GitHub repository and keeps your node package dependencies up to date. So last night I enabled greenkeeper integration for mapguide-react-layout.

Today I got a GitHub notification for a new pull request on mapguide-react-layout. Great! I love pull requests. Except, this pull request is not from a human, it's from the greenkeeper bot (*my first non-human pull request). Looking at the pull request in detail was most amusing.


A bot (coveralls) commenting on a pull request opened by another bot (greenkeeper)!

I wonder how many pull requests out there are nothing but full of bot-on-bot comments? How deep does this bot rabbit hole go?

When bots can start writing their own code, I think that's when we can pack it in as the human race and submit to our bot overlords.
Categories: OSGeo Planet

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 12): A positive cascading effect

OSGeo Planet - Tue, 2017-02-21 10:55
The move to Jest for our testing/coverage needs has opened up some opportunities that were previously roadblocks.

Mainly, we can finally upgrade to Webpack 2. Previously, we were roadblocked because the karma runner just wouldn’t work with Webpack 2 configurations. Also, unlike earlier attempts with Webpack 2 beta releases, this upgrade was less painful and, more importantly, the bundle size remained the same.

OpenLayers also recently released 4.0.0, which includes experimental ES2015 modules. These modules enable a “pay for only what you use” model, which is great for us: we don’t want the kitchen sink, only the parts of the library we actually use. It turns out, based on their webpack example, that this requires Webpack 2, as Webpack 1 will include said modules verbatim, causing most browsers to blow up on the various ES2015 language constructs (like imports).

Well, how convenient that we just upgraded to Webpack 2! After switching over to the new ol package and its ES2015 modules, making the required fixes in our codebase, and checking the final production bundle size, the results show promise.



That is 150kb smaller than our current production bundle! Once other libraries we're using adopt ES2015 modules, we can expect even more weight loss.
Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (VII): Adding coordinates to a Table

OSGeo Planet - Tue, 2017-02-21 05:30

Now we are going to see a very easy and useful tool. It allows us to add X and Y coordinates (or latitude/longitude) to a point layer automatically. In our case, with fictitious cartography in the EPSG:4326 coordinate reference system (the system used by GPS), we will get coordinates that represent latitude and longitude.

We have the “Locations” point layer, and we are going to try the “Add X and Y” tool.

First we make the “Locations” layer active, and we open its attribute table (as we saw in the “Tables” post).

Then we run the tool from the “Table/Add measure/Add X and Y” menu, or from its corresponding button:

We will see that two new columns are added to the attribute table, with the coordinate information.
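The effect of the tool can be sketched in plain Python (hypothetical rows, with each point geometry held as an (x, y) tuple):

```python
def add_xy(rows):
    """Append the coordinates of each point as two new attribute columns."""
    for row in rows:
        x, y = row["geom"]
        row["X"] = x
        row["Y"] = y
    return rows

table = [{"name": "King's Landing", "geom": (20.5, 10.2)}]
add_xy(table)
print(table[0]["X"], table[0]["Y"])  # 20.5 10.2
```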

And now we can send our dragons to the exact coordinates…

See you in the next post…


Filed under: english, gvSIG Desktop, training Tagged: Adding coordinates, Game of Thrones
Categories: OSGeo Planet

GeoTools Team: GeoTools 16.2 Released

OSGeo Planet - Mon, 2017-02-20 22:00
The GeoTools team is pleased to announce the release of GeoTools 16.2. This release is also available from our maven repository.

This release is made in conjunction with GeoWebCache 1.10.2 and GeoServer 2.10.2.

GeoTools 16.2 is the latest stable release of the 16.x series and is recommended for all new projects.

Features and Improvements
  • Graduate YSLD module to supported status
  • Implement Cylindrical Equal Area Projection
  • Relax visibility of StyledShapePainter to allow override of vector fill in subclasses 
Bug Fixes
  • Improve label positioning when using follow line vendor option
  • Fix CRS.getCoordinateOperationFactory scalability bottleneck
  • Make GridCoverageRenderer turn nodata/out-of-ROI pixels into transparent before rendering onto Graphics2D
  • Various ImageMosaic optimizations and bugfixes
And more! For more information please see the release notes (16.2 | 16.1 | 16.0 | 16-RC1 | M0 | beta).

About GeoTools 16
  • The wfs-ng module is now a drop in replacement and will be replacing gt-wfs
  • The NetCDF module now uses NetCDF-Java 4.6.6
Categories: OSGeo Planet

Fernando Quadro: eBook: Open Source no Brasil

OSGeo Planet - Mon, 2017-02-20 18:59

In this report, its author, Andy Oram, explores the various trends in business, education and public policy that have contributed to the current state of open source activity in Brazil. You will discover the country's open source community, its free software movements, the involvement of business and the workforce, and issues related to education.

Despite its problems (government corruption, public health issues and high crime rates), Brazil is still one of the most vibrant economies in Latin America. With its strong extraction, manufacturing and service industries, IT in Brazil is booming as companies seek to digitize their operations. Technology startups are also emerging, and free and open source software is everywhere.

You can download the eBook for free from the O'Reilly website, and find out a bit more about how foreigners see our country. Worth reading!

Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (VI): Hyperlink and other information tools

OSGeo Planet - Mon, 2017-02-20 15:41

Today we are going to see information tools, focusing on learning to use the “Hyperlink” tool.

There are 4 main information tools: information by point, area consultation, distance consultation and hyperlink. We could add other ones, such as “Google Street View”, which allows us to consult the images of that Google service… although in this case there aren’t any Google cars riding around the Game of Thrones landscape yet.

These 4 tools are available from the toolbar:


The first three tools are very intuitive, and a short explanation of how they work is enough to test them.

Information by point: it gives us information about the element that we click on, when its layer is active. It shows a window with the values of that element from the attribute table. For example, if we have the “Locations” layer selected and we click on the point that represents “King’s Landing”, the following window opens:


The consulting area and distance tools work in a similar way. Once the tool is selected, we click on the View, and we will see information about area and perimeter in one case, and about partial and total distance in the other. This information is shown at the bottom of the screen, in the status bar (where we can see other information such as scale, coordinates or units).


The hyperlink is more complex because its settings have to be configured beforehand in the layer “Properties”. Let’s see a practical example:

Reviewing the previous post, “Editing Tables”, we are going to add a series of links to websites about the houses of Game of Thrones. They will be added to the “Web” field of the attribute table of the “Political” layer:

The results will be similar to these:


Now we are going to tell the layer that the “Web” field contains links to websites.

To open the Layer properties window, we click on the layer name with the secondary mouse button in the Table of Contents, or, with the layer active, we access it from the “Layer/Properties” menu.


In the new window we go to the “Hyperlink” tab, the one we are interested in now.

We press “Enable hyperlink”, and we select the “Web” field and the “Link to text and HTML files” action.


Now we can close this window, clicking on the “Accept” button, and start using the hyperlink button on the “Political” layer.

What happens when we click on an element? …a browser opens (which, by the way, will be improved in the next version) showing the web page indicated in the attribute table. In this case we will get the information about each house. For example, when we click on “The North” kingdom it links to the information about House Stark:


Now we are going to create another type of hyperlink, one that opens an image stored on our computer. In our case, we will see the shield of every house, which you can download from this zip file.

To do that, we first start editing mode on the “Political” layer and add the path to the images on your computer in the “Shield” field. For example:

  • /home/alvaro/Escritorio/Shields/Arryn.PNG
  • /home/alvaro/Escritorio/Shields/Baratheon.PNG
  • /home/alvaro/Escritorio/Shields/Greyjoy.PNG
  • /home/alvaro/Escritorio/Shields/Martell.PNG
  • /home/alvaro/Escritorio/Shields/NightsWatch.PNG
  • /home/alvaro/Escritorio/Shields/Stark.PNG
  • /home/alvaro/Escritorio/Shields/Tully.PNG
  • /home/alvaro/Escritorio/Shields/Lannister.PNG
  • /home/alvaro/Escritorio/Shields/Targaryen.PNG
  • /home/alvaro/Escritorio/Shields/Tyrell.PNG

The table will look like this one:


Just as we did before, we define the hyperlink settings, indicating that the field will be “Shield” and the action will be “Link to image files”:


If we use the “Hyperlink” tool, each time we click on an element of the “Political” layer, an image will appear in a new window with the shield of the corresponding House. That way, if we click on “The Westerlands” we will see the Lannister shield:


And as we pay our debts too, we invite you to read the next post of this peculiar course about GIS.


Filed under: english, gvSIG Desktop, training Tagged: area measure, distance measure, Game of Thrones, hyperlink, Information
Categories: OSGeo Planet

From GIS to Remote Sensing: Brief Introduction to Remote Sensing

OSGeo Planet - Mon, 2017-02-20 09:00
This post is about basic definitions of GIS and Remote Sensing, which are included in the user manual of the Semi-Automatic Classification Plugin.
In particular, the following topics are discussed:
  • Basic Definitions
  • GIS definition
  • Remote Sensing definition
  • Sensors
  • Radiance and Reflectance
  • Spectral Signature
  • Landsat Satellite
  • Sentinel-2 Satellite
  • ASTER Satellite
  • MODIS Products
  • Color Composite
  • Principal Component Analysis
  • Pan-sharpening
  • Spectral Indices
  • Supervised Classification Definitions
  • Land Cover
  • Supervised Classification
  • Training Areas
  • Classes and Macroclasses
  • Classification Algorithms
  • Spectral Distance
  • Classification Result
  • Accuracy Assessment
  • Image conversion to reflectance
  • Radiance at the Sensor’s Aperture
  • Top Of Atmosphere (TOA) Reflectance
  • Surface Reflectance
  • DOS1 Correction
  • Conversion to Temperature
  • Conversion to At-Satellite Brightness Temperature
  • Estimation of Land Surface Temperature

Categories: OSGeo Planet

gvSIG Team: The Atlas of Urban Expansion presented at the Ateneo de Valencia

OSGeo Planet - Mon, 2017-02-20 08:53

Next Wednesday, 8 March, at the Ateneo Mercantil de Valencia, our colleague Manuel Madrid will present “The Atlas of Urban Expansion” as part of the activities organized by the “Amigos del Mapa” collective. The gvSIG Association took part in this project together with UN-Habitat and New York University.

If you have the opportunity to attend, don’t miss it. The conclusions of this work are revealing about how our cities are expanding and the problems arising from that expansion.



Filed under: events, Projects, spanish Tagged: Análisis, expansión urbana, urbanismo
Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (IX): Exporting a View to an image

OSGeo Planet - Mon, 2017-02-20 08:40

In gvSIG there are tools for designing more or less complex layouts, but in many cases we just need a quick image of the current extent of a gvSIG View and nothing more, for example to use that image in a document we are writing.

Today we are going to look at a very simple but very useful tool for when we want an immediate image of our View.

To run it, simply go to the “View/Export/Export View to image” menu. A new window will appear where we simply indicate where we want to save the image file and in which format (jpg, png, bmp or tiff).

A simple and useful tool, and one often unknown to gvSIG users.


Filed under: gvSIG Desktop, spanish, training Tagged: Captura pantalla, Exportar, Imagen, Juego de tronos
Categories: OSGeo Planet

Geomatic Blog: Aggregating points: JSON on SQL and loops on infowindows

OSGeo Planet - Mon, 2017-02-20 07:00

NOTE: I’ll use CARTO but you can apply all this to any webmapping technology backed by a modern database.

Get all the data

So we start with the typical use case where we have a one to many relationship like this:

select
  e.cartodb_id, e.displayname, e.division, e.photourl,
  l.cartodb_id as location_id, l.location, l.the_geom_webmercator
from locations l
inner join employees e on e.location = l.location
order by location

Easy peasy: we have a map with many stacked points. From here you could jump to this excellent post by James Milner about dense point maps. My example is not about having thousands of scattered points that overlap at certain zoom levels. Mine is a small set of locations with many points “stacking” on them. In this case you can do two things: aggregate or not. When you aggregate you pay a price in readability: reducing all your data to those locations, maybe using visual variables to show counts or averages or any other aggregated value, and finally trying to use the interactivity of your map to complete the picture.

So at this point we have something like this map, no aggregation yet, but using transparency we can see where CARTO has many employees. We could also use a composite operation instead of transparency to modify the color of the stacked points.

Stacking points using transparency


Aggregate and count

OK, let’s GROUP BY the geometry and apply an aggregation like counting. At least now we know how many people are there, but that’s all; we lose the rest of the details.

select
  l.the_geom_webmercator,
  min(e.cartodb_id) as cartodb_id,
  count(1) as counts
from locations l
inner join employees e on e.location = l.location
group by l.the_geom_webmercator

Grouping by location and counting

Aggregate one field

But in my case, with CARTO we have PostgreSQL at hand so we can do way more than that. PostgreSQL has many many cool features, handling JSON types is one of them. Mix that with the fact that almost all template systems for front-end applications allow you to iterate over JavaScript Objects and you have a winner here.

So we can combine the json_agg function with MustacheJS iteration over objects to allow rendering the names of our employees.

select
  l.the_geom_webmercator,
  min(e.cartodb_id) as cartodb_id,
  l.location,
  json_agg(e.firstname) as names, -- JSON aggregation
  count(1) as counts
from locations l
inner join employees e on e.location = l.location
group by l.the_geom_webmercator, l.location

And this bit of HTML and Mustache template to create a list of employees we can add to the infowindow template:

<ul style="margin:1em;list-style-type: disc;max-height:10em;">
  {{#names}}<li class="CDB-infowindow-title">{{.}}</li>{{/names}}
</ul>
{{^names}}loading...{{/names}}

List of employees on the infowindow

We could do this without JSON types, composing all the markup in the SQL statement, but that would generate quite a lot of content to move to the frontend and, of course, make the whole thing much harder to maintain.
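If you want to experiment with this aggregation pattern locally, SQLite's json_group_array behaves much like PostgreSQL's json_agg (the table and names below are invented for the example):

```python
import json
import sqlite3

# Build a tiny in-memory table and aggregate names per location,
# mirroring the json_agg + GROUP BY query above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (firstname TEXT, location TEXT);
    INSERT INTO employees VALUES
        ('Ada', 'Madrid'), ('Grace', 'Madrid'), ('Alan', 'New York');
""")
rows = conn.execute("""
    SELECT location, COUNT(*) AS counts,
           json_group_array(firstname) AS names
    FROM employees
    GROUP BY location
    ORDER BY location
""").fetchall()
for location, counts, names in rows:
    print(location, counts, json.loads(names))
```

Each output row carries a JSON array you can hand straight to a template engine, which is exactly the trick the infowindow uses.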

Aggregate several fields

At this point we could repeat the same function for the rest of the fields, but we would need to iterate over them separately. It would be much better to create JSON objects with all the content we want in a single output field that we can iterate over in our infowindow. With PostgreSQL we can do this with the row_to_json function, nesting an inner SELECT to give the properties names. We could use row_to_json(row(field1,field2,..)) directly, but then our output fields would have generic names.

select
  l.the_geom_webmercator,
  min(e.cartodb_id) as cartodb_id,
  l.location,
  count(1) as counts,
  json_agg(row_to_json((
    SELECT r FROM (
      SELECT photourl as photo,
             coalesce(preferredname, firstname, '') as name
    ) r
  ), true)) as data
from solutions.bamboo_locations l
inner join solutions.bamboo_employees e on e.location = l.location
group by l.the_geom_webmercator, l.location
order by counts asc

With this query we now have a data field containing an array of objects with the display name and the web address of each employee’s picture. It is now easy to compose a simple infowindow where you can see the faces and names of my colleagues.

<div style="column-count:3;">
  {{#data}}
  <span style="display:inline-block;margin-bottom:5px;">
    <img style="height:35px;" src="{{photo}}"/>
    <br/>
    <span style="font-size:0.55em;">{{name}}</span>
  </span>
  {{/data}}
</div>
{{^data}}loading...{{/data}}

Adding pictures and names

That’s it. You can do even more if you retrieve all the data directly from your database and render on the frontend, for example if you use D3 you probably can do fancy symbolizations and interactions.

One final note: if you use UTF grids (as in these maps with CARTO) you need to be conservative with the amount of content you put in your interactivity, because with medium and big datasets this can make your maps slow and too heavy for the front-end. In those cases you may want to switch to interactivity that works like the WMS GetFeatureInfo workflow, where you retrieve the information directly from the backend when the user clicks on the map, instead of retrieving everything when loading your tiles.

Check the map below and how the interactions show the aggregated contents. What do you think of this technique? Any other procedure to display aggregated data that you think is more effective?


Filed under: CARTO, cartography, GIS, SQL, webmapping
Categories: OSGeo Planet

Ivan Minčík: Unique jobs at Land Information New Zealand

OSGeo Planet - Sun, 2017-02-19 23:58
I have been working at Land Information New Zealand (LINZ) for almost a year now, and my finding is that New Zealand is a truly unique country, unlike any other. Over time, I have slowly realized that LINZ, and especially the part where I work - Location Information - is also very positively unique in its culture and people.
There is a giant area of land and sea we are caring for, stretching from New Zealand over the South-West Pacific to Antarctica. We are running a unique free data publishing service. We are using and contributing to a lot of Open Source software like QGIS, PostGIS, GDAL, Python and Linux. We have unique managers doing Debian packaging. We are singing Maori songs every Friday. We have numerous running clubs. We are facing unique natural challenges, and Kiwis are still the most optimistic people around the globe.




Great news: if you want to know what I am talking about, there is a unique opportunity. We are hiring for two very interesting positions - DevOps Database Developer and Spatial IT Solutions Developer.

Have a look and send us your CV.
Categories: OSGeo Planet

QGIS Blog: QGIS Grants #2: Call for Grant Proposals 2017

OSGeo Planet - Sun, 2017-02-19 19:03

Dear QGIS Community

Last year we held our first ever call for Grant Proposals and it was a great success. If you are an early adopter using QGIS 3.0 preview builds, you can already try out some of the new capabilities that have arrived in QGIS thanks to these grants.

We are very pleased to announce the second round of Grants is now available to QGIS Contributors. The deadline for this round is Sunday, 19 March 2017. All the details for the Grant are described in the application form, and for more context we encourage you to also read these articles:

We look forward to seeing all your great ideas about how to improve QGIS!

Tim Sutton

QGIS Project Chair


Categories: OSGeo Planet

Paul Ramsey: Super Expensive Cerner Crack-up at Island Health

OSGeo Planet - Sun, 2017-02-19 16:00

Kansas City, we have a problem.

Super Expensive Cerner Crack-up at Island Health

A year after roll-out, the Island Health electronic health record (EHR) project being piloted at Nanaimo Regional General Hospital (NRGH) is abandoning electronic processes and returning to pen and paper. An alert reader forwarded me this note from the Island Health CEO, sent out Friday afternoon:

The Nanaimo Medical Staff Association Executive has requested that the CPOE tools be suspended while improvements are made. An Island Health Board meeting was held yesterday to discuss the path forward. The Board and Executive take the concerns raised by the Medical Staff Association seriously, and recognize the need to have the commitment and confidence of the physician community in using advanced EHR tools such as CPE. We will engage the NRGH physicians and staff on a plan to start taking steps to cease use of the CPOE tools and associated processes. Any plan will be implemented in a safe and thoughtful way with patient care and safety as a focus.
Dr. Brendan Carr to all Staff/Physicians at Nanaimo Regional General Hospital

This extremely expensive back-tracking comes after a year of struggles between the Health Authority and the staff and physicians at NRGH.

After two years of development and testing, the system was rolled out on March 19, 2016. Within a couple months, staff had moved beyond internal griping to griping to the media and attempting to force changes through bad publicity.

Doctors at Nanaimo Regional Hospital say a new paperless health record system isn’t getting any easier to use.

They say the system is cumbersome, prone to inputting errors, and has led to problems with medication orders.

“There continue to be reports daily of problems that are identified,” said Dr. David Forrest, president of the Medical Staff Association at the hospital.
– CBC News, July 7, 2016

Some of the early problems were undoubtedly of the “critical fault between chair and keyboard” variety – any new information interface quickly exposes how much we use our mental muscle memory to navigate both computer interfaces and paper forms.

IHealth Terminal & Trainer

So naturally, the Health Authority stuck to their guns, hoping to wait out the learning process. Unfortunately for them, the system appears to have been so poorly put together that no amount of user acclimatization can save it in its current form.

An independent review of the system in November 2016 has turned up not just user learning issues, but critical functional deficiencies:

  • High doses of medication can be ordered and could be administered. Using processes available to any user, a prescriber can inadvertently write an order for an unsafe dose of a medication.
  • Multiple orders for high-risk medications remain active on the medication administration record resulting in the possibility of unintended overdosing.
  • The IHealth system makes extensive use of small font sizes, long lists of items in drop-down menus and lacks filtering for some lists. The information display is dense making it hard to read and navigate.
  • End users report that challenges commonly occur with: system responsiveness, log-in when changing computers, unexplained screen freezes and bar code reader connectivity.
  • PharmaNet integration is not effective and adds to the burden of medication reconciliation.

The Health Authority committed to address the concerns of the report, but evidently the hospital staff felt they could no longer risk patient health while waiting for the improvements to land. Hence a very expensive back-track to paper processes, and then another expensive roll-out process in the future.

This setback will undoubtedly cost millions. The EHR roll-out was supposed to proceed smoothly from NRGH to the rest of the facilities in Island Health before the end of 2016.

This new functionality will first be implemented at the NRGH core campus, Dufferin Place and Oceanside Urgent Care on March 19, 2016. The remaining community sites and programs in Geography 2 and all of Geography 1 will follow approximately 6 months later. The rest of Island Health (Geographies 3 and 4) will go-live roughly 3 to 6 months after that.

Clearly that schedule is no longer operative.

The failure of this particular system is deeply worrying because it is a failure on the part of a vendor, Cerner, that is now the primary provider of EHR technology to the BC health system.

Cerner

When the IBM-led EHR project at PHSA and Coastal Health was “reset” (after spending $72M) by Minister Terry Lake in 2015, the government fired IBM and turned to a vendor they hoped would be more reliable: EHR software maker Cerner.

Cerner was already leading the Island Health project, which at that point (mid-2015) was apparently heading to a successful on-time roll-out in Nanaimo. They seemed like a safe bet. They had more direct experience with the EHR software, since they wrote it. They were a health specialist firm, not a consulting generalist firm.

For all my concerns about failures in enterprise IT, I would have bet on Cerner turning out a successful if very, very, very costly system. There’s a lot of strength in having relevant domain experience: it provides focus and a deep store of best practices to fall back on. And because Cerner is an EHR specialist, a failed EHR project will injure its reputation in a way that a single failed project would barely dent IBM’s clout.

There will be a lot of finger-pointing and blame shifting going on at Island Health and the Ministry over the next few months. The government should not be afraid to point fingers at Cerner and force them to cough up some dollars for this failure. If Cerner doesn’t want to wear this failure, if they want to be seen as a true “partner” in this project, they need to buck up.

Cerner will want to blame the end users. But when data entry takes twice as long as paper processes, that’s not end users’ fault. When screens are built with piles of non-relevant fields, and poor layouts, that’s not end users’ fault. When systems are slow or unreliable, that’s not end users’ fault.

Congratulations British Columbia, on your latest non-working enterprise IT project. The only solace I can provide is that eventually it will probably work, albeit some years later and at several times the price you might have considered “reasonable”.

Categories: OSGeo Planet

longwayaround.org.uk: Postgres Information Functions

OSGeo Planet - Sat, 2017-02-18 07:55

Postgres contains a wealth of functions that provide information about a database and the objects within it. The System Information Functions section of the official documentation provides a full list. There are a huge number of functions covering a whole host of information: the current database session, privileges, function properties and more.

Examples

Find an object's oid

A lot of the info functions accept the Object Identifier Type for objects in the database. This can be obtained by casting to regclass (also described in the oid docs) then to oid:

select 'schema_name.relation_name'::regclass::oid;

Where relation_name is a table, view, index etc.
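Once resolved, the regclass or oid value can be passed straight to many information functions. A small sketch (the table name is a placeholder, and pg_relation_size is just one example of a function that accepts a regclass):

```sql
-- Resolve a relation name to its oid
select 'schema_name.relation_name'::regclass::oid;

-- Many functions accept the regclass/oid directly, e.g. the size functions
select pg_relation_size('schema_name.relation_name'::regclass);
select pg_total_relation_size('schema_name.relation_name'::regclass);
```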

View definition

select pg_get_viewdef('schema_name.view_name'::regclass::oid);

Or in psql you can use one of the built-in commands:

\d+ schema_name.view_name

Function definition

Returns the definition of a given function. Many built-in functions don't reveal much because they aren't written in SQL, but for those that are you'll get the complete create function statement. For example, to view the definition of the PostGIS st_colormap function:

select pg_get_functiondef('st_colormap(raster, integer, text, text)'::regprocedure);

Privileges

A whole host of functions exist to determine privileges for schemas, tables, functions etc. Some examples:

Determine if the current user can select from a table:

select has_table_privilege('schema_name.relation_name', 'select');

Note: The docs state that "multiple privilege types can be listed separated by commas, in which case the result will be true if any of the listed privileges is held". This means that to test a number of privileges it is normally better to test each one individually: select has_table_privilege('schema_name.relation_name', 'select,update'); would return t even if only select is granted.
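To make the caveat concrete, here is a sketch (table name is a placeholder) of the difference between the combined and individual checks, assuming the current user holds select but not update:

```sql
-- Combined check: true if ANY listed privilege is held, so this returns t
-- even though the user cannot update the table
select has_table_privilege('schema_name.relation_name', 'select,update');

-- Individual checks give the unambiguous answer
select has_table_privilege('schema_name.relation_name', 'select');  -- t
select has_table_privilege('schema_name.relation_name', 'update');  -- f
```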

Determine if a user can use a schema:

select has_schema_privilege('schema_name', 'usage');

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 11): I don't say this in jest

OSGeo Planet - Fri, 2017-02-17 13:01
... but seriously, Jest completes my holy trinity of web front-end development nirvana.

  • React, because of its high performance and revolutionary component-based way of building frontend UIs. I could never go back to jQuery, data-binding, string templating and those other primitive ways of building frontends.
  • TypeScript, because it is, in my opinion, the only sane programming language for frontend development. Just like jQuery was the glue that held together inconsistent browser APIs for many years, TypeScript is the glue that lets us play with future JS technologies in the present. TypeScript is JavaScript with C#-quality static typing. I love that a whole class of errors is eliminated through a simple compile step. I can't fathom having to maintain large code bases in a dynamically-typed language. TypeScript brings order and sanity in that regard. And with TypeScript 2.0, I don't have to deal with billion dollar mistakes.
  • And finally, Jest which I believe to be the total package for JavaScript unit testing that is sooooooo easy to set up! Code coverage is also included.
Before I tried Jest, the unit test suite for mapguide-react-layout was a convoluted stack of karma, mocha and chai. I also tried to get code coverage working. But this required a tool called istanbul, and because my code was TypeScript it needed a TypeScript source map plugin for istanbul to recognise it, which resulted in a rube-goldberg-esque contraption that formed the foundation of my unit test suite and didn't even get the coverage right! So I scrapped the code coverage part and just coasted along with the karma/mocha/chai combo until now.
With the introduction of Jest, it does the job of karma/mocha/chai/istanbul in a single, easy-to-install package. Only some small changes to my unit test suite were required (porting chai assertions to their Jest equivalents) and the whole test suite was passing. With the simple addition of a --coverage flag to jest, it automagically generates code coverage results and reports.


Since I am now getting code coverage reports, the obvious next step was to upload them to the coveralls.io service. It turns out TravisCI already supports automatic upload of code coverage reports to coveralls. It just needed installing node-coveralls and piping the jest coverage output to it.
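The wiring described above can be sketched as a Travis configuration fragment. This is a hypothetical sketch, not the project's actual .travis.yml: it assumes Jest's default lcov coverage reporter and the coveralls binary provided by the node-coveralls package, both reading/writing their standard locations.

```yaml
# Hypothetical .travis.yml sketch — filenames and versions are assumptions
language: node_js
node_js:
  - "6"
script:
  # Run the test suite and generate coverage (lcov is a default Jest reporter)
  - jest --coverage
after_success:
  # Pipe the lcov output to coveralls.io via node-coveralls
  - cat ./coverage/lcov.info | ./node_modules/coveralls/bin/coveralls.js
```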
And with that, I get another shiny badge to brandish on the project home page.

This is almost too easy. Results like this easily incentivize you to write good quality code.
One last thing before I close out this post. The 63% coverage figure is a bit misleading. It turns out that it is actually the percentage of code in the modules that the unit tests currently touch, which makes sense. The moment I start bringing other components and classes under test, I expect this percentage to plummet, which is merely incentive to write more tests to bump it back up.