OSGeo Planet

Paulo van Breugel: GRASS GIS Jupyter notebooks

OSGeo Planet - Sun, 2017-10-22 09:43
A great source of information about GRASS GIS is the GRASS Wiki. One example is this list with GRASS GIS Jupyter notebooks which was just added by Markus Neteler (no introduction needed I guess). There are some really nice tutorials there, which alone is reason enough to check out this list. I have been using …

Continue reading GRASS GIS Jupyter notebooks

Categories: OSGeo Planet

Ian Turton's Blog: Adding a .prj file to existing data files

OSGeo Planet - Fri, 2017-10-20 00:00

While teaching a GeoServer course recently, we were trying to add a collection of tif and world files to GeoServer as an image mosaic. But the operation kept failing as GeoServer was unable to work out the projection of the files.

This problem can be avoided by adding a .prj file alongside each tif file to help GeoServer out. However, we had hundreds of files, and a certain national mapping agency had just assumed that everyone knew its files were in EPSG:27700.

Later, I worked up a quick solution to this problem. GeoTools is capable of writing out a WKT representation of a projection and Java has no problem walking a directory tree matching a regular expression.

Getting the WKT of a projection is trivial:

CoordinateReferenceSystem crs = CRS.decode("epsg:27700");
String wkt = crs.toWKT();

Walking the directory tree was a little trickier, but can be done by passing an anonymous SimpleFileVisitor to the Files.walkFileTree method:

public static ArrayList<File> match(String glob, String location) throws IOException {
    ArrayList<File> ret = new ArrayList<>();
    final PathMatcher pathMatcher = FileSystems.getDefault().getPathMatcher("glob:**/" + glob);
    Files.walkFileTree(Paths.get(location), new SimpleFileVisitor<Path>() {
        @Override
        public FileVisitResult visitFile(Path path, BasicFileAttributes attrs) throws IOException {
            // collect every file whose path matches the glob
            if (pathMatcher.matches(path)) {
                ret.add(path.toFile());
            }
            return FileVisitResult.CONTINUE;
        }

        @Override
        public FileVisitResult visitFileFailed(Path file, IOException exc) throws IOException {
            // skip unreadable files rather than aborting the walk
            return FileVisitResult.CONTINUE;
        }
    });
    return ret;
}
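
Putting the two pieces together, a minimal driver might look something like the sketch below. This is an illustration only, not the linked snippet: the argument handling is simplified (an EPSG code, a glob and a directory to search), and it assumes the match helper above together with the GeoTools CRS class already shown.

import java.io.File;
import java.io.FileWriter;
import java.io.IOException;
import org.geotools.referencing.CRS;
import org.opengis.referencing.FactoryException;
import org.opengis.referencing.crs.CoordinateReferenceSystem;

public class AddProj {

    public static void main(String[] args) throws IOException, FactoryException {
        // args[0] = EPSG code (e.g. "epsg:27700"), args[1] = glob (e.g. "*.tif"), args[2] = directory to search
        CoordinateReferenceSystem crs = CRS.decode(args[0]);
        String wkt = crs.toWKT();
        for (File f : match(args[1], args[2])) {
            // swap the data file's extension for .prj and write the WKT next to it
            // (assumes the matched files actually have an extension)
            String prjPath = f.getAbsolutePath().replaceFirst("\\.[^.]+$", ".prj");
            try (FileWriter writer = new FileWriter(prjPath)) {
                writer.write(wkt);
            }
        }
    }

    // match(glob, location) as defined above
}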

The full code can be found in this snippet. Usage is pretty simple: to add a .prj file for a single file (say a shapefile):

java AddProj epsg:27700 file.shp

Or to deal with a whole directory

java AddProj epsg:27700 /data/os-data/rasters/streetview/*.tif

This adds a .prj file for every .tif file in that directory and all its subdirectories.

Obviously you can use other EPSG codes if your data supplier assumes that everyone knows their projection is the only one in the world.

Categories: OSGeo Planet

Oslandia: Auxiliary Storage support in QGIS 3

OSGeo Planet - Thu, 2017-10-19 12:50

For those who know how powerful QGIS can be with data-defined widgets and expressions almost anywhere in styling and labeling settings, it remains quite complex today to store custom data.

For instance, moving a simple label using the label toolbar is not straightforward: that wonderful toolbar remains desperately greyed out for manual labeling tweaks…

…unless you do the following:

  • Set your vector layer editable (yes, it’s not possible with readonly data)
  • Add two columns in your data
  • Link the X property position to a column and the Y position to another

 

the Move Label map tool becomes available and ready to be used (while your layer is editable). Then, if you move a label, the underlying data is modified to store the position. But what happens if you want to fully use the Change Label map tool (color, size, style, and so on)?

 

Well… You just have to add a new column for each property you want to manage. No need to tell you that it's not very convenient to use, or even impossible when your data administrator has set your data to read-only mode…

A plugin made some years ago, named EasyCustomLabeling, addressed that issue. But it was full of caveats, like a dependency on another plugin (Memory layer saver) for persistence, or a full copy of the layer to be labeled inside a memory layer, which led to losing synchronisation with the source layer.

Two years ago, the French Agence de l’eau Adour Garonne (a water basin agency) and the Ministry in charge of Ecology asked Oslandia to think out QGIS Enhancement proposals to port that plugin into QGIS core, among a few other things like labeling connectors or curved labels enhancements.

Those QEPs were accepted and we could work on the real implementation, so here we are: Auxiliary Storage has now landed in master!

How

The aim of auxiliary storage is to propose a more integrated solution to manage these data-defined properties:

  • Easy to use (one click)
  • Transparent for the user (map tools always available by default when labeling is activated)
  • Do not update the underlying data (it should work even when the layer is not editable)
  • Keep in sync with the datasource (as much as possible)
  • Store this data along or inside the project file

As said above, thanks to the Auxiliary Storage mechanism, map tools like Move Label, Rotate Label or Change Label are available by default. Then, when the user selects the map tool to move a label and clicks on the map for the first time, a simple question is asked, allowing them to select a primary key:

Primary key choice dialog – (YES, you NEED a primary key for any data management)

From that moment on, a hidden table is transparently created to store all data defined values (positions, rotations, …) and joined to the original layer thanks to the primary key previously selected. When you move a label, the corresponding property is automatically created in the auxiliary layer. This way, the original data is not modified but only the joined auxiliary layer!

A new tab has been added in vector layer properties to manage the Auxiliary Storage mechanism. You can retrieve, clean up, export or create new properties from there:

Where is the auxiliary data really saved between projects?

We ended up using a lightweight SQLite database which, by default, is just 8 KB! When you save your project with the usual extension .qgs, the SQLite database is saved at the same location but with a different extension: .qgd.

Two thoughts with that choice: 

  • “Hey, I would like to store geometries, why not SpatiaLite instead?”

Good point. We tried that at first, in fact. But the SpatiaLite database initialisation process using the QGIS SpatiaLite provider turned out to be far too slow. And a raw SpatiaLite database weighs about 4 MB, because of the huge spatial reference system table and the numerous spatial functions and metadata tables. We chose to fall back on using SQLite through the OGR provider, and it proved to be fast and stable enough. If some day we manage to merge the SpatiaLite provider and the GDAL-OGR SpatiaLite provider, with options to only create the necessary SRS entries and functions, that would open new possibilities, like storing spatial auxiliary data.

  • “Does that mean that when you want to move/share a QGIS project, you have to manually manage these 2 files to keep them in the same location?!”

True, and dangerous, isn't it? Users often forgot the auxiliary files with the EasyCustomLabeling plugin. Hence, we created a new format that allows zipping several files: .qgz. Using that format, the SQLite database project.qgd and the regular project.qgs file will be embedded in a single zipped .qgz project file. WIN!!

Changing the project file format so that it can embed data, fonts and SVGs was a long-standing feature request. So now we have a format available for self-hosted QGIS projects. Plugins like offline editing, QConsolidate and other similar ones that aim at making it easy to export a portable GIS database could take advantage of that new storage container.

Now, some work remains to add labeling connector capabilities, allowing users to draw labeling paths by hand. If you're interested in making this happen, please contact us!

 

 

More information

A full video showing auxiliary storage capabilities:

 

QEP: https://github.com/qgis/QGIS-Enhancement-Proposals/issues/27

PR New Zip format: https://github.com/qgis/QGIS/pull/4845

PR Editable Joined layers: https://github.com/qgis/QGIS/pull/4913

PR Auxiliary Storage: https://github.com/qgis/QGIS/pull/5086

Categories: OSGeo Planet

gvSIG Team: GIS applied to Municipal Management: Module 5.3 'Web services (non-standard services)'

OSGeo Planet - Thu, 2017-10-19 10:49

The third video of the fifth module is now available. In it we talk about how to work in gvSIG Desktop with web services that do not follow the OGC standards, but which can still be useful to complement our maps with different layers.

Among the available services we have OpenStreetMap, which gives us access to several layers, from street maps to nautical or railway cartography, as well as cartography in different tones that can serve as reference cartography for our map.

Other available services are Google Maps and Bing Maps, from which we can load different layers.

The requirement for loading these layers, up to and including version 2.4, is that the view must use the EPSG 3857 reference system, a system of its own used by these services.

In addition, to load the Bing Maps layers we first need to obtain a key, which we can get as explained in the video.

Once they are loaded, we can reproject our own layers to that reference system. In addition, many OGC web services, such as WMS, WFS…, offer their layers in that reference system, so we can overlay them on top of these layers.

The third video tutorial of this fifth module is the following:

Related posts:


Filed under: gvSIG Desktop, IDE, spanish, training Tagged: ayuntamientos, Bing Maps, gestión municipal, Google Maps, OpenStreetMap, OSM, Servicios web
Categories: OSGeo Planet

gvSIG Team: gvSIG Batoví contest: awards

OSGeo Planet - Wed, 2017-10-18 15:20

gvSIG Batovi

The contest "Work projects with students and gvSIG Batoví" has come to an end. This very gratifying and enriching first experience for Uruguay was quite a challenge from an organisational, planning and coordination point of view. But we can say - with modesty and simplicity, but also with conviction - that it has been a complete success.

This contest sought to encourage the use of gvSIG Batoví in concrete projects. It was an initiative of the Ministerio de Transporte y Obras Públicas (in particular the Dirección Nacional de Topografía), in coordination with the Consejo de Educación Secundaria of the Administración Nacional de Educación Pública -ANEP-CES- (in particular the Inspección Nacional de Geografía) and the Centro Ceibal (in particular the Contents Area and LabTeD -Digital Technology Labs-).

The participating groups (made up of students and teachers of Geography and other secondary education subjects from the public education system across the country) had follow-up support from…

View original post 402 more words


Filed under: gvSIG Desktop
Categories: OSGeo Planet

Jackie Ng: The journey of porting the MapGuide Maestro API to .net standard

OSGeo Planet - Wed, 2017-10-18 14:37
So what prompted the push to port the MapGuide Maestro API to .net standard was Microsoft recently releasing a whole slate of developer goodies.
Of particular relevance to the subject of this post is .net standard 2.0.
For those who don't know, .net standard is (you guessed it) a versioned standard against which one can write portable, cross-platform class libraries that will work in any .net runtime environment supporting the version of .net standard you are targeting. If you do Android development, this is similar to API levels.
.net standard is of interest to me because the MapGuide Maestro API is currently a set of class libraries that target the full .net Framework. Having it target .net standard instead would give us guaranteed cross-platform portability across .net runtime environments that support .net standard (Mono), and/or support for platforms that would never have been possible before (.net Core/Xamarin/UWP).
I had previously attempted porting the Maestro API to earlier versions of .net standard, with mixed success:
  • The ObjectModels library was able to be ported to .net standard 1.6, but required installing many piecemeal System.* nuget packages to fill in the missing APIs.
  • Maestro API itself could not be ported due to reliance of XML schema functionality and HttpWebRequest, that no version of .net standard before 2.0 supported.
  • Maestro API had upstream dependencies (eg. NetTopologySuite) that were not ported to .net standard.
  • More importantly, for the bits I was able to port across (ObjectModels), I couldn't run their respective (full-framework) unit test libraries from the VS test explorer, due to cryptic assembly loading errors caused by the assembly manifest of the various piecemeal System.* assemblies not matching their assembly reference. With no way to run these tests, the porting effort wasn't worth continuing.
Around this time, I heard of what the upcoming (at the time) .net standard 2.0 would bring to the table:
  • Over 2x the API surface of netstandard1.6, including key missing APIs needed by the Maestro API like the XML schema APIs and HttpWebRequest
  • A compatibility mode for full .net Framework. If this works as hoped, it means we can skip waiting on upstream dependencies like NetTopologySuite and friends needing to have netstandard-compatible ports and use the existing versions as-is.
Given the compelling points of .net standard 2.0 and mixed results with porting to the (then) current iteration on .net standard, I decided to put these porting efforts on ice and wait until the time when .net standard 2.0 and its supporting tooling comes out.

Now that .net standard 2.0 and supporting tooling came out, it was time to give this porting effort another try ... and I could not believe how much less painful the whole process was! This was basically all I had to do to port the following libraries to .net standard 2.0:

Preparation Work

To be able to use our (ported to .net standard 2.0) MaestroAPI in the (full framework) Maestro windows application, we needed to first re-target all affected project files to .net Framework 4.6.1, as this is the minimum version of the full .net framework that supports .net standard 2.0.

OSGeo.FDO.Expressions

This is a class library that uses the Irony grammar parser to parse FDO expression strings into an object-oriented form. Maestro uses this library to analyze FDO expressions for validation purposes (e.g. making sure you don't have an FDO expression that references a property that doesn't exist).

My process of converting the existing full framework csproj file to .net standard was to basically just replace the entire contents of the original csproj file with the minimum required content for a .net standard 2.0 class library.


<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

That's right, the content of a minimal .net standard 2.0 class library is just 5 lines of XML! All .cs files are implicitly included now when building this project, which greatly contributes to the simplicity of the new csproj format.

Now obviously this project file as-is won't compile as we need to reference Irony and use VS2017 to regenerate the resx string bundles and source link shared assembly info files. After those changes were made, the project builds with the only notable warning being NU1701, which is the warning emitted by the new tooling when we reference full framework libraries/packages from a netstandard2.0 class library (that the new tooling allows us to do for compatibility purposes).

It was around this time that I discovered that someone has made a netstandard-compatible port of Irony, so we replaced the existing Irony reference with the netstandard-compatible port instead. This library was now fully ported across.

ObjectModels

This is the class library that describes all of our XML resources in MapGuide as strongly-typed classes with full XML (de)serialization support to and from both forms at various schema versions.

The original porting attempt targeted netstandard 1.6. While this was mostly painless, I had to reference tons of piecemeal System.* nuget packages, which then flowed down to anything that was referencing it.

For this attempt, we target .net standard 2.0 using the same technique of pasting a minimal netstandard2.0 class library template into the existing csproj file. Like the previous attempt, building this project failed due to dependencies on System.Drawing as a result of usages of System.Drawing.Font. Further analysis shows that we were using Font as a glorified DTO. So it was just a case of adding a new type that carried the same properties we were capturing with the System.Drawing.Font objects that were being passed around.

Due to referencing the NETStandard.Library metapackage by default, this attempt did not require referencing piecemeal System.* nuget packages like the previous attempts. So that's another library ported across.

MaestroAPI

Now for the main event. Maestro API needed to be netstandard-compatible, otherwise this whole porting effort would be a waste. The previous attempt (to target netstandard1.6) was cut short as APIs such as the XML Schema support were not there. In .net standard 2.0, these missing APIs are back, so porting MaestroAPI across should be a much simpler affair.

And indeed it was.

Just like the ObjectModels porting effort, we hit some snags around references to System.Drawing. Unlike ObjectModels, we were using full blown Images and Bitmaps from System.Drawing and not things like Fonts which we were just using to sling font information around.

To address this problem, a new full framework library (OSGeo.MapGuide.MaestroAPI.FxBridge) was introduced, to which the classes that were using these incompatible types were relocated. There were also service interfaces that returned System.Drawing.Image objects (IMappingService). These APIs have been modified to return raw System.IO.Stream objects instead, with the FxBridge library providing extension methods to "polyfill" in the old APIs that returned images. Thus, code that used these affected APIs can just reference the FxBridge library in addition to MaestroAPI and keep working as before.

After sectioning off these incompatible types to the FxBridge library, the next potential roadblock in our porting efforts was our upstream dependencies. In particular, we were using NetTopologySuite, GeoAPI and Proj.NET to give Maestro API a strongly-typed geometry model and some basic coordinate system transformation capabilities. These were all full framework packages, meaning our previous porting attempt (to target netstandard1.6) was stopped in its tracks.

Because netstandard2.0 has a full-framework compatibility shim, we were able to reference these existing packages with the standard NU1701 compatibility warnings spat out by NuGet. However, since the previous porting attempt, the authors of NetTopologySuite, GeoAPI and Proj.NET have released netstandard-compatible (albeit prerelease) versions of their respective libraries, so as a result we were able to fully netstandard-ify all our dependencies as well.

However, we had to turn off strong naming of our assembly in the process because our upstream dependencies did not offer strong-named netstandard assemblies.

And with that, the Maestro API was ported to .net standard 2.0

MaestroAPI HTTP Provider

However, the Maestro API would not be useful without a functional HTTP provider to communicate with the mapagent. So this library also needed to be netstandard-compatible.

The previous porting attempt (to netstandard1.6) was roadblocked because the HTTP provider uses HttpWebRequest to communicate with the mapagent. While we could have just replaced HttpWebRequest with the newer HttpClient, that would require a full async/await-ification of the whole code base and then having to deal properly with the leaky abstractions known as SynchronizationContext and ConfigureAwait to ensure our async/await-ified HTTP provider is usable in both ASP.net and desktop windows application contexts without it deadlocking on one or the other.

While having a fully async HTTP provider is good, I wanted to have a functional one first before undertaking the task of async/await-ifying it. The development effort involved was such that it was better to just wait for .net standard 2.0 to arrive (where HttpWebRequest was supported) than to try to modify the HTTP provider to use HttpClient.

And just like the porting of the ObjectModels/MaestroAPI projects, this was a case of taking the existing csproj file, replacing the contents with the minimal netstandard class library template and manually adding in the required references and various settings until the project built again.

Caught in a snag

So all the key parts of the Maestro API have been ported across to .net standard 2.0 and the code all builds, so now it was time to run our unit tests to make sure everything was still green.

All green they were indeed. All good! Now to run the thing.

Most things seemed to work until I validated a Map Definition and got this message.



Assembly manifest what? I have no idea! This error is also thrown when I use any part of the MaestroAPI that uses NetTopologySuite -> GeoAPI.

My first port of call was to look at this known issue and try all the workarounds listed:
  • Force all our projects to use PackageReferences mode for installing/restoring nuget packages
  • Enable automatic binding redirect generation on all executable projects
After trying these workarounds, the assembly manifest errors still persisted. At this point I was stuck and was on the verge of giving up on this porting effort until some part of my brain told me to take a look at the assemblies that were in the output directory.
Since the error in question referred to GeoAPI.dll, I thought I'd crack that assembly open in ILSpy and see what interesting information I could find about this assembly.


Well this was certainly most interesting! Why is a full-framework GeoAPI.dll being copied out? The only direct consumer of GeoAPI (OSGeo.MapGuide.MaestroAPI.dll) is netstandard2.0, and it is referencing the netstandard target of GeoAPI.

Here's a diagram of what I was expecting to see:



After digging around some more it appears from observation that there is a bug (or is it feature?) in MSBuild where given a nuget package that offers both netstandard and full-framework targets, it will prefer the full-framework target over the netstandard one. This means in the case of GeoAPI, because our root application is a full-framework one, MSBuild chose the full-framework target offered by GeoAPI instead of the netstandard one.
So what's the assembly manifest error all about? The FusionLog property of the exception reveals the answer.


GeoAPI is strong-named for full-framework. GeoAPI is not strong-named for netstandard. The assembly manifest error is because our netstandard-targeting MaestroAPI references the netstandard target of GeoAPI (not strong-named), but because our root application is a full-framework one, MSBuild gave us a full-framework GeoAPI assembly instead. At runtime, .net could not reconcile that a strong-named GeoAPI was being loaded when our netstandard-targeting MaestroAPI references the netstandard GeoAPI that is not strong-named. Hence the assembly manifest error.
Multi-targeting for the ... win?

Okay, so now we know why it's happening, what can we do about it? Well, the other major thing that the new MSBuild and csproj file format gives us is the ability to easily multi-target the project for different frameworks and runtimes.

By changing the TargetFramework element in our project to TargetFrameworks (plural) and specifying a semi-colon-delimited list of TFMs, we now have a class library that can build for each one of the TFMs specified.

For example, a netstandard 2.0 class library like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>

Can be made to multi-target like this:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFrameworks>netstandard2.0;net461</TargetFrameworks>
  </PropertyGroup>
</Project>

If MSBuild insists on giving us full-framework dependencies if given the choice between full-framework and netstandard (when both are compatible), then the solution is to basically multi-target the MaestroAPI class library so that we offer 2 flavors of the assembly:
  • A full-framework one (net461) that will be selected by MSBuild if the consuming application is a full-framework one.
  • The netstandard one (netstandard2.0) that will be selected by MSBuild if the consuming application is .net Core, Xamarin, etc.
Under this setup MSBuild will choose the full-framework Maestro API over the netstandard one when building the Maestro windows application. Since we're now building for multiple frameworks/runtimes and explicitly targeting the full framework again, we can re-activate strong naming on the full-framework (net461) target, ensuring the full-framework dependency chain of MaestroAPI is fully strong-named (as it was before we started this porting effort). With that, our assembly manifest error goes away when running unit tests and the Maestro application itself whenever we hit functionality that uses GeoAPI/NetTopologySuite.

So the problem is effectively solved, but the whole process feels somewhat anti-climactic.

I mean ... the whole premise of .net standard and why I wanted to port MaestroAPI to target it was the promise of one unified target (an interface if you will) with many supporting runtimes (ie. various implementations of this interface). Target the standard and your class library will work across the supporting runtimes, in theory.

Unfortunately in practice, strong-naming (and MSBuild choosing full-framework targets over netstandard, even if both are compatible) was the leaky abstraction that threw a monkey wrench on this whole concept, especially if some targets are strong-named and some are not. Having to multi-target the Maestro API as a workaround feels unnecessary.

But at the end of the day, we still achieved our goal of a netstandard-compatible Maestro API that can be used in .net Core, Xamarin, etc. We just had to take a very long detour to get from A to B, and all I can think of was: was this (multi-targeting) absolutely necessary?

Some Changes and Compromises

Although we now have a .net standard and full framework compatible versions of the Maestro API, we have to make some changes and compromises around the developer and acquisition experience for this to work in a cross-platform .net world.

1. For reasons previously stated, we have to disable strong-naming of the Maestro API for the .net standard target. This is brought upon us by our upstream dependencies (the netstandard flavors of GeoAPI and NetTopologySuite), which we can't do anything about. The full framework target however is still strong-named as before.

2. The SDK package in its current form will most likely go away. This is because turning Maestro API into a .net standard library forces us to use nuget packages as the main delivery mechanism, which is a good thing because nobody should be manually referencing assemblies in this day and age for consuming libraries. The tooling now is just so brain-dead simple that we have no excuse to not make nuget packages. No SDK package also means that we can look at alternative means of generating API documentation (docfx looks like a winner), instead of Sandcastle as making CHM files is kind of pointless and the only reason I made CHM files was to bundle it with the SDK package.

The sample code and supporting tools that were previously part of the SDK package will be offloaded to a separate GitHub repository that I'll announce in due course. I'll need to re-think the main ASP.net code sample as well, because the old example required:

  • Manually setting up a web application in local IIS (not IIS Express)
  • Manually referencing a whole bunch of assemblies
  • Needing to run Visual Studio as administrator to debug the code sample due to the local IIS constraint.

These are things that should not be done in 2017!

3. Because nuget packages are the strongly preferred way of consuming libraries, it meant that having the HTTP provider as a separate library just complicates things (having to register this provider in ConnectionProviders.xml and automating it when installing its theoretical nuget package). The Maestro API on its own is pretty useless without the HTTP provider anyways, so in the interest of killing two birds with one stone, the HTTP provider has been integrated into the Maestro API assembly itself. This means that you don't even need ConnectionProviders.xml unless you need to use the (mg-desktop wrapper) local connection provider, or you need to use a (roll your own wrapper around the official MapGuide API) local-native connection provider.

4. The CI machinery needed some adjustments. I couldn't get OpenCover to work against our newly ported netstandard libraries using (dotnet test) as the runner, so I had to momentarily disable the OpenCover instrumentation while the unit tests ran in AppVeyor. But as a result of needing to multi-target MaestroAPI (for reasons already stated), I decided on this CI matrix:

  • Use AppVeyor to run the Maestro API unit tests for the full-framework target on Windows. Because we're running the tests under a full-framework runner, the OpenCover instrumentation can be restored, allowing us to upload code coverage reports again to coveralls.io
  • Use TravisCI to run the Maestro API unit tests for the netstandard target under .net Core 2.0 on Linux. The whole motivation for netstandard-izing MaestroAPI was to get it to run on these non-windows .net platforms, so let TravisCI handle and verify/validate that aspect for us. We have no code coverage stats here, but surely they can't be radically different from the code coverage stats had we run the same test suite on Windows with OpenCover instrumentation.
Where to from here?
Now that the porting efforts have been completed, the next milestone release should follow shortly. 
This milestone will probably only concern the application itself as the SDK story needs revising and I don't want that to hold up on a new release of Maestro (the application).
Categories: OSGeo Planet

gvSIG Team: Opening of the 13th International gvSIG Conference

OSGeo Planet - Wed, 2017-10-18 14:29

Good morning to everyone present.

I would like to begin by thanking everyone whose effort has made it possible for us to be opening the 13th International gvSIG Conference today. This very month the 4th Mexican gvSIG Conference and the 9th LAC Conference in Brazil have also been held.

These are indicators, together with others such as the awards received this year, that show we are looking at a consolidated project, in constant growth and with users in more than 160 countries. No small feat.

So far in 2017, gvSIG has been recognised in the 'Share & Reuse Awards' by the European Commission as the most important free software project in Europe. It is an award for an entire trajectory, which makes it even more relevant. It is a distinction that recognises, and I take advantage of Vicente being here to say it, the more than deserved commitment of the Generalitat Valenciana to promoting geomatics with free software and Valencian talent.

To this we can add the Excellence award in the international category from the Unión Profesional de Valencia, the Valencian Telecommunications award for 'Organisation driving ICT' and, finally and for the 3rd consecutive year, the 'Europa Challenge' awarded by NASA to the best professional solution, the gvSIG Suite.

I believe that all of us who, in one way or another, to a greater or lesser extent, from the communities or from the organisations, are driving it forward should feel proud. We have always said it: gvSIG is not a path to travel, it is a path we build together.

On the other hand, it no longer needs saying. It is an accepted fact. Geomatics has become a science, a fundamental tool. The modernisation of information systems undoubtedly depends on it. On the geolocation of ICT.

And given its importance, it is common sense to opt for solutions that guarantee our independence, our rights as users, our freedom to adapt the technology to our needs and not the other way round.

It is also common sense to use technology to boost our industry and generate highly specialised companies, within a framework of collaboration and shared knowledge. To make the collaborative economy a reality in a field as specialised as technology.

This is perhaps the most ambitious goal of the project, which takes shape in the gvSIG Association and in the launch of the gvSIG Suite, a broad catalogue of professional solutions that we will talk about a great deal during this conference.

Little more to add: enjoy and make the most of the conference, and for those of you coming from elsewhere, enjoy this wonderful city as well.

Thank you very much, and welcome to the 13th International gvSIG Conference.


Filed under: events Tagged: 13as Jornadas gvSIG
Categories: OSGeo Planet

Jackie Ng: A simpler MgCooker tile seeding process

OSGeo Planet - Wed, 2017-10-18 13:03
I don't know if you've ever seen the guts of the tile seeding code used by MgCooker. It isn't the prettiest of things, but it for the most part works.

Besides some cosmetic restructuring of the code, I haven't really touched this part of Maestro ever.

Consider the history of this tiling code. It originated around 2009. Things we now take for granted like async/await and the Task Parallel Library probably didn't exist around that time, so you had no choice but to dive deep into wait handles, auto-reset events and manual thread management.

If I had to write MgCooker from scratch today, I'd cook up (pun intended) probably something like this

using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;
...
public class TileSeeder
{
    public void SeedTiles()
    {
        List<(int row, int col, int scale)> tiles = ComputeTileRequestList();
        int total = tiles.Count;
        int rendered = 0;
        var sw = new Stopwatch();
        sw.Start();

        //The magic sauce that multi-threads our tile seeding and takes care of all our multi-threading concerns!
        Parallel.ForEach(tiles, (tile) =>
        {
            //Send a HTTP request to the GETTILE mapagent API with tile.row, tile.col and tile.scale
            ...
            Interlocked.Increment(ref rendered);
            Console.WriteLine($"Rendered {rendered}/{total} tiles");
        });

        //Parallel.ForEach blocks, so if we get to this point, the tiling process has finished.
        sw.Stop();
        Console.WriteLine($"Rendered {rendered} tiles in {sw.Elapsed}");
    }
}

Isn't this much easier to read and comprehend?

The implementation of the ComputeTileRequestList method referenced here is omitted for brevity, but for the implementation we can just reuse what is in the current iteration of MgCooker. Most of the settings in MgCooker mainly affects the generation of the list of row/col/scale anyways.

The core multi-threaded "render/cache all these tiles" logic is just one simple Parallel.ForEach call, baked right into the .net Framework itself!

MgCooker is overdue for a rewrite anyways. I just didn't really think it would be so conceptually simple with today's .net libraries and C# language constructs!
Categories: OSGeo Planet

PostGIS Development: PostGIS Patch Releases

OSGeo Planet - Wed, 2017-10-18 00:00

The PostGIS development team has uploaded bug fix releases for the 2.2, 2.3 and 2.4 stable branches.

2.2.6

2.3.4

2.4.1

Categories: OSGeo Planet

GeoTools Team: GeoTools 18.0 Released

OSGeo Planet - Tue, 2017-10-17 08:58
The GeoTools team is pleased to announce the release of GeoTools 18.0. This release is also available from our Maven repository.

Thanks to everyone who took part in the code-freeze, monthly bug stomp, or directly making the release. This release is made in conjunction with GeoServer 2.12.0

This release is the new stable release and as such users and downstream projects should consider moving from older releases to this one.
Highlights from our issue tracker release-notes:
  • GeoPackage store now supports spatial indexes.
  • A WMTS store has been added; this allows programs to process tiles in a similar way to the existing WMS store.
For more information see past release notes (18-RC1 | 18-beta).

Thanks to Astun Technology for allowing Ian Turton to make this release.
Categories: OSGeo Planet

GeoServer Team: GeoServer 2.12.0 Released

OSGeo Planet - Tue, 2017-10-17 08:55

We are happy to announce the release of GeoServer 2.12.0. Downloads are available (zip, war, dmg and exe) along with docs and extensions.

This is a stable release recommended for production use. This release is made in conjunction with GeoTools 18.0.

Rest API now using Spring MVC

In March, we upgraded the framework used by the GeoServer REST API from Restlet to Spring MVC. All the endpoints have remained unchanged, and we would like to thank everyone who took part.

We should also thank David Vick who migrated the embedded GeoWebCache REST API, and the entire team who helped him reintegrate the results for this 2.12.0 release.

Thanks again to the code sprint sponsors and in-kind contributors:

Gaia3d, Atol, Boundless, How2map, FOSSGIS, IAG

As part of this upgrade, we also have new REST documentation, providing detailed information about each endpoint. The documentation is written in swagger, allowing different presentations to be generated as shown below.

 

WMTS Cascading

Adds the ability to create WMS layers backed by remote WMTS layers, similar to the pre-existing WMS cascading functionality.

See GSIP-162 for more details.

Style Based Layer Groups

Adds the ability to define a listing of layers and styles using a single SLD file, in accordance with the original vision of the SLD specification. This includes a new entry type in the Layer Group layers list and a new preview mode for the style editor.

GeoServer has long supported this functionality for clients, via an external SLD file. This change allows more people to use the idea of a single file defining their map layers and styling as a configuration option.

See GSIP-161 for more details.

Options for KML Placemark placement

New options for KML encoding have been added, to control the placement of placemark icons, mostly for polygons. The syntax of the new options introduces three new top-level format options keys:

&format_options=kmcentroid_contain:true;kmcentroid_samples:10;kmcentroid_clip:true

See GSIP-160 for more details.

GeoWebCache data security API

Add an extension point to GeoWebCache allowing for a security check based on the layer and extent of the tile. Adds an implementation of this extension point to GeoServer’s GWC integration.

This change mostly only affects developers but will lead to improved security for users in the future.

See GSIP 159 for more details.

NetCDF output support for variable attributes and extra variables

Adds the following to the NetCDF output extension:

  1. An option to allow all attributes to be copied from the source NetCDF/GRIB variable to the target variable.
  2. Support for manual configuration of variable attributes, much like the current support for setting global attributes.
  3. Support for configuration of extra variables which are copied from the NetCDF/GRIB source to the output; initially only scalar variables will be supported. Extra variables can be expanded to “higher” dimensions, that is, values copied from one scalar per ImageMosaic granule are assembled into a multidimensional variable over, for example, time and elevation.

See GSIP 158 for more details.

New labelling features and QGIS compatibility

A number of small new features have been added to labelling to match some of QGIS features, in particular:

  • Kerning is on by default
  • New vendor option to strikethrough text
  • New vendor options to control char and word spacing


  • Perpendicular offset now works also for curved labels (previously only supported for straight labels):
  • Labeling the border of polygons as opposed to their centroid when using a LinePlacement (here with repetition and offset):

Along with this work, some SLD 1.1 text symbolizer fixes were added in order to better support the new QGIS 3.0 label export. Here is an example of map labeling with a background image, as shown in QGIS, and then again in GeoServer using the same data and the exported SLD 1.1 style (click to enlarge):

   

CSS improvements

The CSS styling language and editing UI have seen various improvements. The editor now supports some primitive code completion:

At the language level:

  • Scale dependencies can now also be expressed using the “@sd” variable (scale denominator) and the values can use common suffixes such as k and M to get more readable values, compare for example “[@scale < 1000000]” with “[@sd < 1M]”
  • Color functions have been introduced to match LessCSS functionality, like “Darken”, “Lighten”, “Saturate” and so on. The same functions have been made available in all other styling languages.
  • Calling a “env” variable has been made easier, from “env(‘varName’)” to “@varName” (or “@varName(defaultValue)” if you want to provide a default value).

As you probably already know, internally CSS is translated to an equivalent SLD for map rendering purposes. This translation process became 50 times faster over large stylesheets (such as OSM roads, a particularly long and complicated style).

Image mosaic improvements and protocol control

Image mosaic saw several improvements in 2.12.

First, the support for mosaicking images in different coordinate reference systems improved greatly, with several tweaks and correctness fixes. As a noteworthy change, the code can now handle source data crossing the dateline. The following images show the footprints of images before and after the dateline (expressed in two different UTM zones, 60 and 1 respectively) and the result of mosaicking them as rasters (click to get a larger picture of each):

There is more good news for those that handle mosaics with a lot of super-imposing images taken at different times. If you added interesting information into the mosaic index, such as cloud cover, off-nadir, snow cover and the like, you can now filter and sort them, in both WMS (viewing) and WCS (downloading) by adding the cql_filter and sortBy KVP parameters.

Here is an example of the same mosaic, the first composite favouring smallest cloud cover, the second one favouring recency instead (click to enlarge):

    

GeoPackage graduation

The GeoPackage store jumped straight from community to core package, in light of its increasing importance.

The WMS/WFS/WPS output formats are still part of community. Currently, GeoPackage vector does not support spatial indexes but stay tuned, it’s cooking!

New community modules

The 2.12 series comes with a few new community modules, in particular:

  • Looking into styling vector tiles and server side using a single language? Look no further than the MBStyle module
  • For those into Earth Observation, there is a new OpenSearch for EO module in the community section
  • Need to store full GeoTiff in Amazon S3? The “S3 support for GeoTiff” module might just be what you’re looking for
  • A new “status-monitoring” community module has been added, providing basic statistics on system resource usage. Check out this pull request to follow its progress towards being merged.

Mind, community modules are not part of the release, but you can find them in the nightly builds instead.

Other assorted improvements

Highlights of this release are featured below; for more information please see the release notes (2.12.0 | 2.12-RC1 | 2.12-beta):

  • Users REST uses default role service name as a user/group service name
  • imageio-ext-gdal-bindings-xxx.jar not available in geoserver-2.x.x-gdal-plugin.zip anymore since 2.10
  • REST GET resource metadata – file extension can override format parameter
  • GeoServer macOS picks up system extensions
  • SLD files not deleted when SLD is deleted in web admin
  • Reproject geometries in WMS GetFeatureInfo responses when info_format is GML
  • Include Marlin by default in bin/win/osx downloads, add to war instructions
  • Handle placemark placement when centroid of geometry not contained within
  • Enable usage of viewParams in WPS embedded WFS requests
  • Add GeoJson encoder for complex features
  • Allow image mosaic to refer a GeoServer configured store
  • Duplicate GeoPackage formats in layer preview page
  • ExternalGraphicFactory does not have a general way to reset caches
  • Generating a raster SLD style from template produced a functionally invalid style, now fixed
  • Style Editor Can Create Incorrect External Legend URLs
  • Namespace filtering on capabilities returns all layer groups (including the ones in other workspaces)

 

About GeoServer 2.12 Series

Additional information on the 2.12.0 series:

Categories: OSGeo Planet

gvSIG Team: GIS applied to Municipal Management: Module 5.2 'Web services (Loading web services from gvSIG Desktop)'

OSGeo Planet - Mon, 2017-10-16 08:30

The second video of the fifth module is now available, in which we will see how to load web services from gvSIG Desktop. In the first video of this module we saw an introduction to Spatial Data Infrastructures (SDI), which helps us to better understand this new video.

Many administrations make a large amount of cartography available to users, often as web services accessible from desktop applications or web viewers, which allows us to access that cartography without having to download anything to our disk.

The cartography used in this module can be downloaded from the following link.

The second video tutorial of this fifth module is the following:

Related posts:


Filed under: gvSIG Desktop, IDE, spanish, training Tagged: IDE, Infraestructuras de Datos Espaciales, Servicios web, WFS, WMS
Categories: OSGeo Planet

Cameron Shorter: The Yin & Yang of OSGeo Leadership

OSGeo Planet - Sun, 2017-10-15 22:42

The 2017 OSGeo Board elections are about to start. Some of us who have been involved with OSGeo over the years have collated thoughts about the effectiveness of different strategies. Hopefully these thoughts will be useful for future boards, and for charter members who are about to select board members.

The Yin and Yang of OSGeo

As with life, there are a number of Yin vs Yang questions we are continually trying to balance. Discussions around acting as a high or low capital organisation; organising top down vs bottom up; populating a board with old wisdom or fresh blood; personal vs altruistic motivation; protecting privacy vs public transparency. Let's discuss some of them here.

Time vs Money

OSGeo is an Open Source organisation using a primary currency of volunteer time. We mostly self-manage our time via principles of Do-ocracy and Merit-ocracy. This is bottom up.

However, OSGeo also manages some money. Our board divvies up a budget which is allocated down to committees and projects. This is top-down command-and-control management. This cross-over between volunteer and market economics is a constant point of tension. (For more on the cross-over of economies, see Paul Ramsey's FOSS4G 2017 Keynote, http://blog.cleverelephant.ca/2017/08/foss4g-keynote.html)

High or low capital organisation?

Our 2013 OSGeo Board tackled this question: https://wiki.osgeo.org/wiki/OSGeo_Board_:_Board_Priorities_2013#OSGeo_as_a_low_capital.2C_volunteer_focused_organisation

Should OSGeo act as a high capital or low capital organisation? I.e., should OSGeo dedicate energy to collecting sponsorship and then passing out these funds to worthy OSGeo causes?

While initially it seems attractive to have OSGeo woo sponsors, because we would all love to have more money to throw at worthy OSGeo goals, the reality is that chasing money is hard work. And someone who can chase OSGeo sponsorship is likely conflicted with chasing sponsorship for their particular workplace. So in practice, to be effective in chasing sponsorship, OSGeo will probably need to hire someone specifically for the role. OSGeo would then need to raise at least enough to cover wages, and then quite a bit more if the sponsorship path is to create extra value.

This high capital path is how the Apache foundation is set up, and how LocationTech propose to organise themselves. It is the path that OSGeo started following when founded under the umbrella of Autodesk.

However, as OSGeo has grown, it has slowly evolved toward a low capital, volunteer focused organisation. Our overheads are very low, which means we waste very little of our volunteer labour and capital on the time consuming task of chasing and managing money. Consequently, any money we do receive (from conference windfalls or sponsorship) goes a long way - as it doesn't get eaten up by high overheads.

Size and Titles

Within small communities, influence is based around meritocracy and do-ocracy. Good ideas bubble to the top and those who do the work decide what work gets done. Leaders who try to pull rank in order to gain influence quickly lose volunteers. Within these small communities, a person's title holds little tradable value.

However, our OSGeo community has grown very large, upward of tens of thousands of people. At this size, we often can't use our personal relationships to assess reputation and trust. Instead we need to rely on other cues, such as titles and allocated positions of power.

Consider also that OSGeo projects have become widely adopted. As such, knowledge and influence within an OSGeo community has become a valuable commodity. It helps land a job; secure a speaking slot at a conference; or get an academic paper published.

This introduces a commercial dynamic into our volunteer power structures:
  • A title is sometimes awarded to a dedicated volunteer, hoping that it can be traded for value within the commercial economy. (In practice, deriving value from a title is much harder than it sounds).
  • There are both altruistic and personal reasons for someone to obtain a title. A title can be used to improve the effectiveness of the volunteer; or to improve the volunteers financial opportunities.
  • This can prompt questions of a volunteer’s motivations.
In response to this, over the years we have seen a gradual change in the position of roles within the OSGeo community.

Top-down vs bottom-up

OSGeo board candidates have been asked for their "vision", and "what they would like to change or introduce". https://wiki.osgeo.org/wiki/Election_2017_Candidate_Manifestos These are valid questions if OSGeo were run as a command-and-control top-down hierarchy; if board-made decisions were delegated to OSGeo committees to implement. But OSGeo is bottom-up. Boards which attempt to centralise control and delegate tasks cause resentment and disengagement amongst volunteers. Likewise, communities who try to delegate tasks to their leaders merely burn out their leaders. Both are ignoring the principles of Do-ocracy and Merit-ocracy. So ironically, boards which do less are often helping more.

Darwinian evolution means that only awesome ideas and inspiring leaders attract volunteer attention - and that is a good thing.

Recognising ineffective control attempts

How do you recognise ineffective command-and-control techniques within a volunteer community? Look for statements such as:
  • “The XXX committee needs to do YYY…”
  • “Why isn’t anyone helping us do …?”
  • “The XXX community hasn’t completed YYY requirements - we need to tell them to implement ZZZ”
If all the ideas from an organisation come from management, then management isn't listening to their team.

Power to the people

In most cases the board should keep out of the way of OSGeo communities. Only in exceptional circumstances should a board override volunteer initiatives.

Decisions and power within OSGeo should be moved back into OSGeo committees, chapters and projects. This empowers our community, and motivates volunteers wishing to scratch an itch.

We do want our board members to be enlightened, motivated and engaged within OSGeo. This active engagement should be done within OSGeo communities: partaking, facilitating or mentoring as required. A recent example of this was Jody Garnett's active involvement with OSGeo rebranding - where he worked with others within the OSGeo marketing committee.

Democratising key decisions

We have a charter membership of nearly 400 who are tasked with 'protecting' the principles of the foundation and voting for new charter members and the board. Beyond this, however, charter members have had little way of engaging with the board to influence the direction of OSGeo.

How can we balance the signal-to-noise ratio such that we can achieve effective membership engagement with the board without overwhelming ourselves with chatter? Currently we have no formal or prescribed processes for such consultation.

Reimbursement

OSGeo Board members are not paid for their services. However, they are regularly invited to partake in activities such as presenting at conferences or participating in meetings with other organisations. These are typically beneficial to both OSGeo and the leader's reputation or personal interest. To avoid OSGeo Board membership being seen as a "Honey Pot", and for the Board to maintain trust and integrity, OSGeo board members should refuse payment from OSGeo for partaking in such activities. (There is nothing wrong with accepting payment from another organisation, such as the conference organisers.)

In response to the question of conferences, OSGeo has previously created OSGeo Advocates - an extensive list of local volunteers from around the world willing to talk about OSGeo: https://wiki.osgeo.org/wiki/OSGeo_Advocate

Old vs new

Should we populate our board with old wisdom or encourage fresh blood and new ideas? We ideally want a bit of both, bringing wisdom from the past while also spreading the opportunity of leadership across our membership. We should avoid leadership becoming an exclusive "boys club" without active community involvement, and possibly should consider maximum terms for board members.

If our leadership follows a "hands off oversight role", then past leaders can still play influential roles within OSGeo's subcommittees.

Vision for OSGeo 2.0

Prior OSGeo thought leaders have suggested it's time to grow from OSGeo 1.0 to OSGeo 2.0; time to update our vision and mission. A few of those ideas have fed into OSGeo's website revamp currently underway. This has been a good start, but there is still room to acknowledge that much has changed since OSGeo was born a decade ago, and there are plenty of opportunities to positively redefine ourselves. A test of OSGeo's effectiveness is to see how well community ideas are embraced and taken through to implementation. This is a challenge that I hope will attract new energy and new ideas from a new OSGeo generation. Here are a few well-considered ideas that have been presented to date that we can start from.

Recommendations

So where does this leave us?
  • Let’s recognise that OSGeo is an Open Source community, and we organise ourselves best with bottom-up Meritocracy and Do-ocracy.
  • Wherever possible, decisions should be made at the committee, chapter or project level, with the board merely providing hands-off oversight. This empowers and enables our sub-communities.
  • Let’s identify strategic topics where the OSGeo board would benefit from consultation with charter membership and work out how this could be accomplished efficiently and effectively.
  • Let’s embrace and encourage new blood into our leadership ranks, while retaining access to our wise old white beards.  
  • The one top-down task for the board is based around allocation of OSGeo’s (minimal) budget.
Categories: OSGeo Planet

Free and Open Source GIS Ramblings: Movement data in GIS #9: trajectory data models

OSGeo Planet - Sun, 2017-10-15 16:23

There are multiple ways to model trajectory data. This post takes a closer look at the OGC® Moving Features Encoding Extension: Simple Comma Separated Values (CSV). This standard was published in 2015, but I haven't been able to find any reviews of the standard (in a GIS context or anywhere else).

The following analysis is based on the official OGC trajectory example at http://docs.opengeospatial.org/is/14-084r2/14-084r2.html#42. The header consists of two lines: the first line provides some meta information while the second defines the CSV columns. The data model is segment based. That is, each line describes a trajectory segment with at least two coordinate pairs (or triplets for 3D trajectories). For each segment, there is a start and an end time which can be specified as absolute or relative (offset) values:

@stboundedby,urn:x-ogc:def:crs:EPSG:6.6:4326,2D,50.23 9.23,50.31 9.27,2012-01-17T12:33:41Z,2012-01-17T12:37:00Z,sec
@columns,mfidref,trajectory,state,xsd:token,"type code",xsd:integer
a, 10,150,11.0 2.0 12.0 3.0,walking,1
b, 10,190,10.0 2.0 11.0 3.0,walking,2
a,150,190,12.0 3.0 10.0 3.0,walking,2
c, 10,190,12.0 1.0 10.0 2.0 11.0 3.0,vehicle,1

Let’s look at the first data row in detail:

  • a … trajectory id
  • 10 … start time offset from 2012-01-17T12:33:41Z in seconds
  • 150 … end time offset from 2012-01-17T12:33:41Z in seconds
  • 11.0 2.0 12.0 3.0 … trajectory coordinates: x1, y1, x2, y2
  • walking …  state
  • 1… type code
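
Purely as an illustration (this is not official OGC tooling), a minimal Python sketch like the one below can parse the example rows above and resolve the relative start/end offsets against the base time given in the @stboundedby header. The file name trajectories.csv is just a placeholder for a file containing those rows:

# Toy parser for the segment-based OGC Moving Features CSV example above.
# Assumes 2D coordinates and relative (offset) times in seconds, as in the sample.
import csv
from datetime import datetime, timedelta

BASE_TIME = datetime(2012, 1, 17, 12, 33, 41)    # from the @stboundedby header

segments = []
with open("trajectories.csv") as f:              # hypothetical file with the rows above
    for row in csv.reader(f):
        if row[0].startswith("@"):               # skip @stboundedby and @columns lines
            continue
        traj_id = row[0].strip()
        start = BASE_TIME + timedelta(seconds=float(row[1]))
        end = BASE_TIME + timedelta(seconds=float(row[2]))
        coords = [float(v) for v in row[3].split()]
        xy = list(zip(coords[0::2], coords[1::2]))   # [(x1, y1), (x2, y2), ...]
        state, type_code = row[4], int(row[5])
        segments.append((traj_id, start, end, xy, state, type_code))

for seg in segments:
    print(seg)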

My main issues with this approach are

  1. They missed the chance to use WKT notation to make the CSV easily readable by existing GIS tools.
  2. As far as I can see, the data model requires a regular sampling interval because there is no way to store time stamps for intermediate positions along trajectory segments. (Irregular intervals can be stored using segments for each pair of consecutive locations.)

In the common GIS simple feature data model (which is point-based), the same data would look something like this:

traj_id,x,y,t,state,type_code
a,11.0,2.0,2012-01-17T12:33:51Z,walking,1
a,12.0,3.0,2012-01-17T12:36:11Z,walking,1
a,10.0,3.0,2012-01-17T12:36:51Z,walking,2
b,10.0,2.0,2012-01-17T12:33:51Z,walking,2
b,11.0,3.0,2012-01-17T12:36:51Z,walking,2
c,12.0,1.0,2012-01-17T12:33:51Z,vehicle,1
c,10.0,2.0,2012-01-17T12:35:21Z,vehicle,1
c,11.0,3.0,2012-01-17T12:36:51Z,vehicle,1

The main issue here is that there has to be some application logic that knows how to translate from points to trajectories. For example, trajectory a changes from walking/1 to walking/2 at 2012-01-17T12:36:11Z, but we have to decide whether to store the previous or the following state and type code for this individual point.

An alternative to the common simple feature model is the PostGIS trajectory data model (which is LineStringM-based). For this data model, we need to convert time stamps to numeric values, e.g. 2012-01-17T12:33:41Z is 1326803621 in Unix time. In this data model, the data looks like this:

traj_id,trajectory,state,type_code
a,LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771),walking,1
a,LINESTRINGM(12.0 3.0 1326803771, 10.0 3.0 1326803811),walking,2
b,LINESTRINGM(10.0 2.0 1326803631, 11.0 3.0 1326803811),walking,2
c,LINESTRINGM(12.0 1.0 1326803631, 10.0 2.0 1326803771, 11.0 3.0 1326803811),vehicle,1

This is very similar to the OGC data model, with the notable difference that every position is time-stamped (instead of just having segment start and end times). If one has movement data which is recorded at regular intervals, the OGC data model can be a bit more compact, but if the trajectories are sampled at irregular intervals, each point pair will have to be modeled as a separate segment.

Since the PostGIS data model is flexible, explicit, and comes with existing GIS tool support, it’s my clear favorite.
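
To show how one might move from the point-based records to the PostGIS model, here is a small Python sketch of my own (not part of PostGIS or the OGC standard) that groups the point rows by trajectory id, converts the ISO timestamps to Unix time for the M values, and prints LINESTRINGM WKT. It deliberately ignores the state/type_code split discussed above and assumes a hypothetical points.csv file with the columns shown earlier:

# Build LINESTRINGM WKT (x y m, with m = Unix time) from point-based trajectory rows.
import csv
from collections import defaultdict
from datetime import datetime, timezone

points = defaultdict(list)
with open("points.csv") as f:                    # hypothetical file with the point rows
    for row in csv.DictReader(f):
        t = datetime.strptime(row["t"], "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
        points[row["traj_id"]].append((float(row["x"]), float(row["y"]), int(t.timestamp())))

for traj_id, pts in points.items():
    pts.sort(key=lambda p: p[2])                 # order vertices by time
    coords = ", ".join(f"{x} {y} {m}" for x, y, m in pts)
    print(f"{traj_id},LINESTRINGM({coords})")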

Read more:


Categories: OSGeo Planet

BostonGIS: Using pg_upgrade to upgrade PostGIS without installing an older version of PostGIS

OSGeo Planet - Sun, 2017-10-15 05:11

PostGIS releases a new minor version every one or two years. Each minor version of PostGIS has a different libname suffix. In PostGIS 2.1 you'll find files in your PostgreSQL lib folder called postgis-2.1.*, rtpostgis-2.1.*, postgis-topology-2.1.*, address-standardizer-2.1.* etc., and in a PostGIS 2.2 install you'll find similar files but with 2.2 in the name. I believe PostGIS and pgRouting are the only extensions that stamp the lib with a version number. Most other extensions are just called extension.so, e.g. hstore is always called hstore.dll / hstore.so even if the PostgreSQL version changed from 9.6 to 10. On the bright side this allows people to have two versions of PostGIS installed in a PostgreSQL cluster, though a database can use at most one version. So you can have an experimental database running a very new or unreleased version of PostGIS and a production database running a more battle-tested version.

On the sad side, this causes a lot of PostGIS users frustration when trying to use pg_upgrade to upgrade from an older version of PostGIS/PostgreSQL to a newer version of PostGIS/PostgreSQL, as their pg_upgrade often bails with a message in the loaded_libraries.txt log file something to the effect of:

could not load library "$libdir/postgis-2.2": ERROR: could not access file "$libdir/postgis-2.2": No such file or directory
could not load library "$libdir/postgis-2.3": ERROR: could not access file "$libdir/postgis-2.3": No such file or directory

This is also a hassle because we generally don't support newer versions of PostgreSQL on older PostGIS installs, since PostgreSQL major version changes tend to break our code often, and backporting those changes is both time-consuming and dangerous. Examples are the DatumGetJsonb change and this PostgreSQL 11 crasher we haven't isolated the cause of yet. There are several changes like this that have already made the recently released PostGIS 2.4.0 incompatible with PostgreSQL 11 head development.

Continue reading "Using pg_upgrade to upgrade PostGIS without installing an older version of PostGIS"
Categories: OSGeo Planet

Free and Open Source GIS Ramblings: Movement data in GIS extra: trajectory generalization code and sample data

OSGeo Planet - Fri, 2017-10-13 18:41

Today’s post is a follow-up of Movement data in GIS #3: visualizing massive trajectory datasets. In that post, I summarized a concept for trajectory generalization. Now, I have published the scripts and sample data in my QGIS-Processing-tools repository on Github.

To add the trajectory generalization scripts to your Processing toolbox, you can use the Add scripts from files tool:

It is worth noting that Add scripts from files fails to correctly import potential help files for the scripts, but that's not an issue this time around, since I haven't gotten around to actually writing help files yet.

The scripts are used in the following order:

  1. Extract characteristic trajectory points
  2. Group points in space
  3. Compute flows between cells from trajectories

The sample project contains input data, as well as output layers of the individual tools. The only required input is a layer of trajectories, where trajectories have to be LINESTRINGM (note the M!) features:

Trajectory sample based on data provided by the GeoLife project

In Extract characteristic trajectory points, distance parameters are specified in meters, stop duration in seconds, and angles in degrees. The characteristic points contain start and end locations, as well as turns and stop locations:
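
The published Processing scripts are the reference implementation; purely to illustrate the general idea, the following stand-alone Python sketch (my own simplification, with made-up default thresholds and a planar-coordinate assumption) keeps start and end points, sharp turns above an angle threshold, and stops longer than a duration threshold for a single trajectory given as (x, y, t) tuples:

# Toy characteristic-point detector: keeps start/end, sharp turns, and long stops.
import math

def characteristic_points(points, min_angle_deg=30.0, min_stop_s=60.0, max_stop_dist=0.0):
    """points: list of (x, y, t) tuples with t in seconds, ordered by time."""
    if len(points) < 2:
        return list(points)
    keep = [points[0]]                                    # start location
    for prev, curr, nxt in zip(points, points[1:], points[2:]):
        # stop: (almost) no movement for longer than the stop threshold
        dist = math.hypot(nxt[0] - curr[0], nxt[1] - curr[1])
        if dist <= max_stop_dist and (nxt[2] - curr[2]) >= min_stop_s:
            keep.append(curr)
            continue
        # turn: change of heading between the incoming and outgoing segment
        h_in = math.atan2(curr[1] - prev[1], curr[0] - prev[0])
        h_out = math.atan2(nxt[1] - curr[1], nxt[0] - curr[0])
        turn = abs(math.degrees((h_out - h_in + math.pi) % (2 * math.pi) - math.pi))
        if turn >= min_angle_deg:
            keep.append(curr)
    keep.append(points[-1])                               # end location
    return keep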

The characteristic points are then clustered. In this tool, the distance has to be specified in layer units, which are degrees in case of the sample data.

Finally, we can compute flows between cells defined by these clusters:

Flow lines scaled by flow strength and cell centers scaled by counts

If you use these tools on your own data, I'd be happy to see what you come up with!

Read more:


Categories: OSGeo Planet

Equipo Geotux: Publicar un servicio de teselas de mapas con QMetatiles y GitHub

OSGeo Planet - Fri, 2017-10-13 15:07

To publish a WMTS service using nothing more than static storage on a web server, you need a set of tools that, first, generate the image tiles and, second, produce the XML capabilities file. For the first task, the following QGIS plugins will be used.

 

 


Note: This document is part of the workshop guide material for the Geographic Web Services course of the Master's in Geomatics at the Universidad Nacional de Colombia, Bogotá campus; more information at http://www.aulageo.cloud/course/unal-ogc-2017/

1. IDENTIFYING TILE SCALES WITH THE QGIS TILELAYER PLUGIN

Once the QGIS plugins are installed, load the Laguna de Tota project. Remember that the project's default CRS must be EPSG:3857 (Web Mercator), since the tools used here are only compatible with this coordinate reference system. Once the project is displayed, load the tile matrix scheme from the menu Web → TileLayer Plugin → Add TileLayer …; for the WMTS service, this is the XYZFrame scheme.

TileLayer Plugin

The scheme is shown as a new layer named XYZFrame, and it makes it possible to identify the minimum and maximum zoom range as well as the number and index of the tiles. In this case, the minimum zoom at which our study area fits in a single tile is 13, and the origin index in the tile matrix is 2434,3967.
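
The tile index can also be derived directly from the standard Web Mercator (XYZ / "slippy map") tiling formula. The helper below is a generic sketch, not part of the TileLayer plugin, and the sample coordinates are only rough values near Laguna de Tota; the exact index you get depends on the zoom level and on which corner of the extent you evaluate:

# Compute the XYZ tile index containing a given lon/lat at a given zoom level
# (standard Web Mercator / "slippy map" tiling scheme).
import math

def deg2tile(lon_deg, lat_deg, zoom):
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Illustrative coordinates only (roughly the Laguna de Tota area), at zoom 13
print(deg2tile(-72.92, 5.55, 13))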

 

2. GENERATING THE GOOGLE-COMPATIBLE TILE MATRIX SET WITH THE QGIS QMETATILES PLUGIN

Once the zoom levels (matrix scales) have been identified, generate the tiles with the QMetaTiles plugin, available under the menu Plugins → QMetaTiles → QMetaTiles.

The parameters requested by this tool are shown in the following image.

QMetaTiles

  • Output: the directory path where the tiles will be created. If you are using the GeoTux Server, it is recommended to use the path corresponding to the /gisdata/tiles mount point.
  • Tileset name: the name of the project or tile matrix set, in this case "z11to17".
  • Extent: the geographic extent for tile generation; in this case, use a layer to restrict the geographic extent of the generated tiles.
  • Zoom: the zoom levels for which the tile matrix set is generated; in this case a range of 11 to 17 was identified earlier.

Read more...
Categories: OSGeo Planet

Even Rouault: Optimizing JPEG2000 decoding

OSGeo Planet - Thu, 2017-10-12 16:41
Over this summer I have spent 40 days (*) in the guts of the OpenJPEG open-source library (BSD 2-clause licensed) optimizing the decoding speed and memory consumption. The result of this work is now available in the OpenJPEG 2.3.0 release.
For those who are not familiar with JPEG-2000 - and there is plenty of excuse for that, given its complexity - this is a standard for image compression that supports lossless and lossy methods. It uses a discrete wavelet transform for multi-resolution analysis, and a context-driven binary arithmetic coder for encoding of bit plane coefficients. When you go into the depths of the format, what is striking is the number of independent variables that can be tuned:
- use of tiling or not, and tile dimensions
- number of resolutions
- number of quality layers
- code-block dimensions
- 6 independent options regarding how code-blocks are encoded (code-block styles): use of Selective arithmetic coding bypass, use of Reset context probabilities on coding pass boundaries, use of Termination on each coding pass, use of Vertically stripe causal context, use of Predictable termination, use of Segmentation Symbols. Some can bring decoding speed advantages (notably selective arithmetic coding bypass), at the price of less compression efficiency. Others might help hardware based implementations. Others can help detecting corruption in the codestream (predictable termination)
- spatial partition of code-blocks into so-called precincts, whose dimension may vary per resolution
- progression order, ie the criterion to decide how packets are ordered, which is a permutation of the 4 variables: Precincts, Component, Resolution, Layer. The standard allows for 5 different permutations. To add extra fun, the progression order might be configured to change several times among the 5 possible (something I haven't yet had the opportunity to really understand)
- division of packets into tile-parts
- use of multi-component transform or not
- choice of lossless or lossy wavelet transforms
- use of start of packet / end of packet markers
- use of  Region Of Interest, to have higher quality in some areas
- choice of image origin and tiling origins with respect to a reference grid (the image and tile origin are not necessarily pixel (0,0))
And if that was not enough, some/most of those parameters may vary per-tile! If you already found that TIFF/GeoTIFF had too many parameters to tune (tiling or not, pixel or band interleaving, compression method), JPEG-2000 is probably one or two orders of magnitude more complex. JPEG-2000 is truly a technological and mathematical jewel. But needless to say that having a compliant JPEG-2000 encoder/decoder, which OpenJPEG is (it is an official reference implementation of the standard) is already something complex. Having it perform optimally is yet another target.
Prior to that latest optimization round, I had already worked on enabling multi-threaded decoding at the code-block level, since code-blocks can be decoded independently (once you've re-assembled from the code-stream the bytes that encode a code-block), and in the inverse wavelet transform as well (during the horizontal pass, resp. vertical pass, rows, resp. columns, can be transformed independently). But single-threaded use had yet to be improved. Roughly, around 80 to 90% of the time during JPEG-2000 decoding is spent in the context-driven binary arithmetic decoder, around 10% in the inverse wavelet transform, and the rest in other operations such as the multi-component transform.
I managed to get around 10% improvement in the global decompression time by porting to the decoder an optimization that had been proposed by Carl Hetherington for the encoding side, in the code that determines which bit of a wavelet-transformed coefficient must be encoded during which coding pass. The trick here was to reduce the memory needed for the context flags, so as to decrease the pressure on the CPU cache. Other optimizations in that area have consisted in making sure that some critical variables are kept preferably in CPU registers rather than in memory. I've spent a good deal of time looking at the disassembly of the compiled code.
I've also optimized the reversible (lossless) inverse transform to use the Intel SSE2 (or AVX2) instruction sets to be able to process several rows at once, which can result in up to a 3x speed-up for that stage (so a global 3% improvement).
I've also worked on reducing the memory consumption needed to decode images, by removing the use of intermediate buffers when possible. The result is that the amount of memory needed to do full-image decoding was reduced by a factor of 2.4.
Another major work direction was to optimize speed and memory consumption for sub-window decoding. Up to now, the minimal unit of decompression was a tile, which is OK for tiles of reasonable dimensions (let's say 1024x1024 pixels), but definitely not for images that don't use tiling and that hardly fit into memory. In particular, OpenJPEG couldn't open images of more than 4 billion pixels. The work consisted of 3 steps:
- identifying which precincts and code-blocks are needed for the reconstruction of a spatial region
- optimizing the inverse wavelet transform to operate only on the rows and columns needed
- reducing the allocation of buffers to the amount strictly needed for the subwindow of interest
The overall result is that the decoding time and memory consumption are now roughly proportional to the size of the subwindow to decode, whereas they were previously constant. For example, decoding 256x256 pixels in a 13498x9944x3 bands image now takes only 190 ms, versus about 40 seconds before.
As a side activity, I've also fixed 2 different annoying bugs that could cause lossless encoding to not be lossless for some combinations of tile sizes and number of resolutions, or when some code-block style options were used.
I've just updated the GDAL OpenJPEG driver (in GDAL trunk) to be more efficient when dealing with untiled JPEG-2000 images.
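
From the GDAL side, the benefit shows up as soon as you request only a small window of a large JPEG-2000 file. Assuming a GDAL build that uses the new OpenJPEG, a sketch along these lines (file name and window offsets purely illustrative) should only pay roughly for the requested 256x256 block rather than for a full-image decode:

# Read a small sub-window from a (possibly huge, untiled) JPEG-2000 file via GDAL.
from osgeo import gdal

ds = gdal.Open("huge_image.jp2")              # hypothetical input file
band = ds.GetRasterBand(1)
# xoff, yoff, xsize, ysize: only this window needs to be decoded
block = band.ReadAsArray(4096, 4096, 256, 256)
print(block.shape, block.dtype)
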
There are many more things that could be done in the OpenJPEG library:
- port a number of optimizations to the encoding side: multi-threading, discrete wavelet transform optimizations, etc.
- on the decoding side, further reduce the memory consumption, particularly in the untiled case. Currently we need to ingest into memory the whole codestream for a tile (so the whole compressed file, for an untiled image)
- linked to the above, use of TLM and PLT marker segments (kind of indexes to tiles and packets)
- on the decoding side, investigate further improvements for the code specific to irreversible / lossy compression
- make the opj_decompress utility make better use of the API and consume less memory. Currently it decodes a full image into memory instead of proceeding by chunks (you won't have this issue if using gdal_translate)
- investigate how using GPGPU capabilities (CUDA or OpenCL) could help reduce the time spent in the context-driven binary arithmetic decoder.
Contact me if you are interested in some of those items (or others !)


(*) funding provided by academic institutions and archival organizations, namely
… And logistic support from the International Image Interoperability Framework (IIIF), the Council on Library and Information Resources (CLIR), intoPIX, and of course the Image and Signal Processing Group (ISPGroup) from University of Louvain (UCL, Belgium) hosting the OpenJPEG project.
Categories: OSGeo Planet

Paul Ramsey: Adding PgSQL to PHP on OSX

OSGeo Planet - Thu, 2017-10-12 14:00

I’m yak shaving this morning, and one of the yaks I need to ensmooth is running a PHP script that connects to a PgSQL database.

No problem, OSX ships with PHP! Oh wait, that PHP does not include PgSQL database support.

Adding PgSQL to PHP on OSX

At this point, you can either opt to completely replace your built-in PHP with another PHP (probably good if you're doing modern PHP development and want something newer than 5.5) or you can add PgSQL to your existing PHP installation. I chose the latter.

The key is to build the extension you want without building the whole thing. This is a nice trick available in PHP, similar to the Apache module system for independent module development.

First, figure out what version of PHP you will be extending:

> php --info | grep "PHP Version" PHP Version => 5.5.38

For my version of OSX, Apple shipped 5.5.38, so I’ll pull down the code package for that version.

Then, unbundle it and go to the php extension directory:

tar xvfz php-5.5.38.tar.bz2
cd php-5.5.38/ext/pgsql

Now the magic part. In order to build the extension, without building the whole of PHP, we need to tell the extension how the PHP that Apple ships was built and configured. How do we do that? We run phpize in the extension directory.

> /usr/bin/phpize
Configuring for:
PHP Api Version:         20121113
Zend Module Api No:      20121212
Zend Extension Api No:   220121212

The phpize process reads the configuration of the installed PHP and sets up a local build environment just for the extension. All of a sudden we have a ./configure script, and we’re ready to build (assuming you have installed the MacOSX command-line developers tools with XCode).

> ./configure \
    --with-php-config=/usr/bin/php-config \
    --with-pgsql=/opt/pgsql/10
> make

Note that I have my own build of PostgreSQL in /opt/pgsql. You’ll need to supply the path to your own install of PgSQL so that the PHP extension can find the PgSQL libraries and headers to build against.

When the build is complete, you’ll have a new modules/ directory in the extension directory. Now figure out where your system wants extensions copied, and copy the module there.

> php --info | grep extension_dir
extension_dir => /usr/lib/php/extensions/no-debug-non-zts-20121212 => /usr/lib/php/extensions/no-debug-non-zts-20121212
> sudo cp modules/pgsql.so /usr/lib/php/extensions/no-debug-non-zts-20121212

Finally, you need to edit the /etc/php.ini file to enable the new module. If the file doesn’t already exist, you’ll have to copy in the template version and then edit that.

sudo cp /etc/php.ini.default /etc/php.ini
sudo vi /etc/php.ini

Find the line for the PgSQL module and uncomment and edit it appropriately.

;extension=php_pdo_sqlite.dll
extension=pgsql.so
;extension=php_pspell.dll

Now you can check and see if it has picked up the PgSQL module.

> php --info | grep PostgreSQL
PostgreSQL Support => enabled
PostgreSQL(libpq) Version => 10.0
PostgreSQL(libpq) => PostgreSQL 10.0 on x86_64-apple-darwin15.6.0, compiled by Apple LLVM version 8.0.0 (clang-800.0.42.1)

That’s it!

Categories: OSGeo Planet
Syndicate content