OSGeo Planet

GeoSolutions: GeoServer Code Sprint needs you

OSGeo Planet - Tue, 2017-03-21 12:22


Dear Reader,

everything is ready at GeoSolutions for next week's GeoServer code sprint, which will take place in our offices during the week of March 27th.

Sprint 2017

The main focus will be on refactoring GeoServer's REST API towards a more modern approach (see this page for some insights). A number of GeoServer developers from various organizations will gather from all over the world for this work, and your support in funding this initiative would help us with the expenses; therefore, we are asking all our readers for help. Sponsorship opportunities are available for you to contribute on the OSGeo wiki.

Happy sprinting to everybody! The GeoSolutions team,
Categories: OSGeo Planet

gvSIG Team: Learn the secrets of vector geoprocessing in gvSIG with this video tutorial

OSGeo Planet - Tue, 2017-03-21 09:01

gvSIG Desktop provides more than 350 geoprocesses, and that is without counting plugins such as the recently announced JGrass one. A good share of these geoprocesses apply to vector layers, from the most common ones (buffer, clip, merge, …) to more specific and less well-known ones.
Today we present a video tutorial in which, in just a few minutes, you will learn how gvSIG's geoprocesses work, through a series of practical exercises that show how easily the algorithms available in the application can be used.
In the final part of the video tutorial you can learn how to use the geoprocessing modeler, a very useful tool that is not widely known among gvSIG users.
Keep reading…

Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, geoprocesos, modelador
Categories: OSGeo Planet

GeoSolutions: New release of MapStore 2 with theming support

OSGeo Planet - Mon, 2017-03-20 11:03


Dear Reader,

we are pleased to announce a new release of MapStore 2, our flagship open source web GIS product, which we have called 2017.02.00. The full list of changes for this release can be found here, but let us concentrate on the latest and most interesting additions.

Advanced Theming

The main feature of this release is the possibility to use different themes, as shown in the gallery below.

MapStore 2 was conceived with the goal of being highly customizable; we have therefore worked hard on the look and feel from the beginning, to create a product that easily adapts to predefined graphical guidelines, as well as a framework that is easy to integrate with 3rd-party applications.

With this release the goal has been achieved. We have refactored the original theme, greatly simplifying the steps to create new themes and switch between them. You can try switching live from the home page, where a dedicated combo box offers some predefined styles.

On the technical side, we have refactored MapStore 2 theme support using Less, so creating your own theme to match your company's visual design guidelines is now very simple. We are developing an example that allows you to customize your theme directly from the web page; you can see it below or test it here.

Balloon Tutorial

The balloon tutorial is now available, with HTML support. You can try it live by clicking on "Tutorial" in the map's burger menu, for instance in this map.

Notes for Developers

This release has a number of changes that are crucial for developers to know, since they break compatibility with older versions. Here you can find the details of what we updated and how to migrate your application.

We strongly believe that these changes will speed up development and improve the quality and readability of the code (particularly redux-observable). If you find yourself struggling with these changes, reach out to us on the developer mailing list and we will help you out.

We also looked around for a tool to produce developer docs that would satisfy our needs in the longer term. We found docma to be a great tool, as it allows us to provide generic guides as well as document our components, plugins and JavaScript API inline using JSDoc + Markdown. You can find the current version of the developer documentation here.

Twitter Account

MapStore 2 now has its own Twitter account, which it uses to let us know how it feels, as well as to share useful information and insights.

What we are working on for the next release

The main focus for the next release is the implementation of a JavaScript API to allow you to include MapStore 2 in your application or website and interact with it in more advanced ways than a simple IFRAME. We are also going to focus on the following items:

  • Improve developer's documentation
  • Improve the management of Maps, in order to allow users to manage them also from the map itself
  • Better interaction with WFS

In the longer term, we have a number of features and functionalities in our plans like editing, advanced templating, styling, OAUTH 2.0, and more…

So, stay tuned and happy web mapping!

If you are interested in learning about how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,
Categories: OSGeo Planet

gvSIG Team: Video tutorial available to learn geostatistics with gvSIG

OSGeo Planet - Wed, 2017-03-15 09:06

The statistical package R is one of the most flexible, powerful and professional tools currently available for statistical tasks of all kinds, from the most elementary to the most advanced. And, most importantly, it is free software.

Since its latest versions, gvSIG Desktop has included plugins to integrate R, opening up the possibility of performing all kinds of geostatistical analyses.

Geologists, biologists, ecologists, agronomists, engineers, meteorologists, sociologists… to name just a few professionals, need to analyze statistical information from georeferenced data.

From the gvSIG Association we present a video tutorial that will introduce you to working with the gvSIG-R pair.

If we have piqued your interest, keep reading…

Filed under: gvSIG Desktop, spanish Tagged: Geoestadística, r
Categories: OSGeo Planet

GeoServer Team: GeoServer 2.11-RC1 Released

OSGeo Planet - Tue, 2017-03-14 07:23

We are happy to announce the release of GeoServer 2.11-RC1. Downloads are available (zip, war, dmg and exe) along with docs and extensions.

This is a release candidate of GeoServer not intended for production use. This release is made in conjunction with GeoTools 16-RC1 and GeoWebCache 1.11-RC1.

Thanks to everyone who provided feedback, bug reports and fixes. Here are some of the changes included in 2.11-RC1:

  • Incompatibilities with GeoFence and Control-flow have been resolved
  • Empty WFS Transactions (which perform no changes) no longer indicate that everything has changed
  • Improvements to WFS GetFeature support for 3D BBOX requests
  • We have one known regression with the windows service installer (memory setting is incorrect)
  • For additional details see the release notes (2.11-RC1, 2.11-beta)
Release Candidate Testing

The 2.11 release is expected in March, this release candidate is a “dry run” where we confirm new functionality is working as expected and double check the packaging and release process.

Please note that GeoServer 2.9 has reached its end-of-life. If your organization has not yet upgraded, please give us a hand by evaluating 2.11-RC1 and providing feedback and your experiences to the development team. This is a win/win situation where your participation can both assist the GeoServer team and reduce your risk when upgrading.

Corrected default AnchorPoint for LabelPlacement

An issue with SLD 1.0 rendering has been fixed – when a LabelPlacement did not include an AnchorPoint we were using the wrong default!

  • BEFORE: the default anchor point was X=0.5 and Y=0.5 – at the middle height and middle length of the label text.
  • AFTER: the default anchor point is X=0.0 and Y=0.5 – at the middle height of the left-hand side of the label text.

This is a long-standing issue that was only just noticed in February. If you need to “restore” the incorrect behaviour, please start up with the -Dorg.geotools.renderer.style.legacyAnchorPoint=true system property.

Startup Performance

This release includes extensive improvements to startup performance and OGC request handling for large installations; we look forward to feedback from your testing.

About GeoServer 2.11

GeoServer 2.11 is scheduled for March 2017 release. This puts GeoServer back on our six month “time boxed” release schedule.

  • OAuth2 for GeoServer (GeoSolutions)
  • YSLD has graduated and is now available for download as a supported extension
  • Vector tiles have graduated and are now available for download as an extension
  • The rendering engine continues to improve, with underlining of labels now available as a vendor option
  • A new “opaque container” layer group mode can be used to publish a basemap while completely restricting access to the individual layers.
  • Layer group security restrictions are now available
  • Latest in performance optimizations in GeoServer (GeoSolutions)
  • Improved lookup of EPSG codes allows GeoServer to automatically match EPSG codes making shapefiles easier to import into a database (or publish individually).
Categories: OSGeo Planet

Even Rouault: Dealing with huge vector GeoPackage databases in GDAL/OGR

OSGeo Planet - Sat, 2017-03-11 22:25
Recently, I've fixed a bug in the OGR OpenFileGDB driver, the driver built by reverse engineering the ESRI FileGeoDatabase format, so as to be able to read tables whose section that enumerates and describes fields is located beyond the first 4 GB of the file. The affected table, from the 2016 TIGER database, contains all linear edges of the USA and is 15 GB large (feature and spatial indexes included), with 85 million features.

Some time before, I had to deal with a smaller database - 1.7 GB as GeoPackage - with 5.4 million polygons (bounding boxes) from the cadastre of an Italian province. One issue I noticed is that getting the summary of the layer, with ogrinfo -al -so the.gpkg, was very slow. The reason is that this summary includes the feature count, and there is no way to get this metadata quickly, apart from running a "SELECT COUNT(*) FROM the_table" request, which causes a full scan of the table. For small databases this runs fast, but when going into the gigabyte realm it can take several dozen seconds. Getting the spatial extent of the layer, another piece of information displayed by the summary mode of ogrinfo, is fast however, since the gpkg_contents "system" table of a GeoPackage database includes the bounding box of the table. So my idea was to extend the definition of the gpkg_contents table with a new column, ogr_feature_count, to store the feature count. I went to implement that, and it worked fine. The value of ogr_feature_count can be kept synchronized after edits with two SQLite triggers, on row insertion and deletion, and that works with implementations that are not aware of the existence of this new column, like older OGR versions. Unfortunately, it appears that at least one other implementation completely rejected such databases. The GeoPackage specification is somewhat inconsistent as to whether additional columns are accepted in system tables. From the /base/core/contents/data/table_def test case - "Column order, check constraint and trigger definitions, and other column definitions in the returned sql are irrelevant." - it would seem that additional columns should still be considered a valid GeoPackage. Anyway, that is only the theory, and we don't want to break interoperability for just a nice-to-have feature...
So I changed the design a bit and created a new table, gpkg_ogr_contents, with table_name and feature_count columns. I am aware that I should not borrow the gpkg_ prefix, but I felt it was safer to do so since other implementations will probably ignore any unknown gpkg_-prefixed table, and the addition of the ogr_ prefix makes collisions with future extensions of the GeoPackage specification unlikely. The content of this table is also kept synchronized with the data table thanks to two triggers, and this makes the other software that rejected my first attempt happy. Problem solved.
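The trigger-based bookkeeping described above can be sketched with plain SQLite. Table and column names follow the post (gpkg_ogr_contents, table_name, feature_count), but the trigger names and exact DDL used by the OGR driver are illustrative assumptions here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A feature table standing in for a GeoPackage layer.
cur.execute("CREATE TABLE the_table (fid INTEGER PRIMARY KEY, geom BLOB)")

# Side table holding the cached feature count, as described in the post.
cur.execute("""CREATE TABLE gpkg_ogr_contents (
                   table_name TEXT NOT NULL PRIMARY KEY,
                   feature_count INTEGER)""")
cur.execute("INSERT INTO gpkg_ogr_contents VALUES ('the_table', 0)")

# Two triggers keep the count in sync on row insertion and deletion.
cur.execute("""CREATE TRIGGER the_table_insert_count AFTER INSERT ON the_table
               BEGIN
                   UPDATE gpkg_ogr_contents SET feature_count = feature_count + 1
                   WHERE table_name = 'the_table';
               END""")
cur.execute("""CREATE TRIGGER the_table_delete_count AFTER DELETE ON the_table
               BEGIN
                   UPDATE gpkg_ogr_contents SET feature_count = feature_count - 1
                   WHERE table_name = 'the_table';
               END""")

cur.executemany("INSERT INTO the_table (geom) VALUES (?)", [(b"",)] * 5)
cur.execute("DELETE FROM the_table WHERE fid = 1")

# The count is now available without a full table scan.
count = cur.execute("SELECT feature_count FROM gpkg_ogr_contents "
                    "WHERE table_name = 'the_table'").fetchone()[0]
print(count)  # 4
```

Implementations unaware of the side table simply never read it, which is why this variant survived where the extra-column variant did not.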

Let's come back to our 13 GB FileGeoDatabase. My first attempt to convert it to GeoPackage with ogr2ogr converted the features in about half an hour, but once the 100% stage was reached, the finalization, which includes building the spatial index, took ages. So long that after a whole night it still wasn't finished, and it made the computer seriously unresponsive due to massive I/O activity. In the GeoPackage driver, the spatial index is created after feature insertion, so that the feature table and spatial index tables are well separated in the file; from previous experiments with the Spatialite driver, this proved to be the right strategy. Populating the SQLite R-Tree is done with a single statement: INSERT INTO my_rtree SELECT fid, ST_MinX(geom), ST_MaxX(geom), ST_MinY(geom), ST_MaxY(geom) FROM the_table. Analyzing what happens in the SQLite code is not easy when you are not familiar with the code base, but my intuition is that there was constant back and forth between the geometry data area and the R-Tree area in the file, making the SQLite page cache inefficient. So I decided to experiment with a more progressive approach: iterate over the feature table and collect the fid, minx, maxx, miny, maxy values by chunks of 100 000 rows, insert those 100 000 bounding boxes into the R-Tree, and loop again until the feature table has been completely read. With that strategy, the spatial index can now be built in 4h30. The resulting GeoPackage file weighs 31.6 GB, twice as large as the FileGeoDatabase. One of the reasons for the difference must be that geometries in FileGeoDatabase are compressed (quantization for coordinate precision, delta encoding and use of variable-length integers), whereas GeoPackage uses an uncompressed SQLite BLOB based on OGC WKB.
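The chunked R-Tree population above can be sketched as follows. For portability the sketch uses a plain table named the_rtree in place of a real "CREATE VIRTUAL TABLE ... USING rtree(...)" (not every SQLite build ships the rtree module); the chunked read/insert pattern is the point:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Feature table with precomputed bounding boxes (stand-in for real geometries).
cur.execute("CREATE TABLE the_table (fid INTEGER PRIMARY KEY, "
            "minx REAL, maxx REAL, miny REAL, maxy REAL)")
cur.executemany("INSERT INTO the_table VALUES (?, ?, ?, ?, ?)",
                [(i, i, i + 1.0, i, i + 1.0) for i in range(1, 250_001)])

# Plain-table stand-in for the R-Tree virtual table.
cur.execute("CREATE TABLE the_rtree (id INTEGER PRIMARY KEY, "
            "minx REAL, maxx REAL, miny REAL, maxy REAL)")

CHUNK = 100_000
read = conn.cursor()
read.execute("SELECT fid, minx, maxx, miny, maxy FROM the_table")
total = 0
while True:
    rows = read.fetchmany(CHUNK)   # collect up to 100 000 bounding boxes
    if not rows:
        break                      # feature table completely read
    cur.executemany("INSERT INTO the_rtree VALUES (?, ?, ?, ?, ?)", rows)
    total += len(rows)
print(total)  # 250000
```

Each batch touches the index area of the file once instead of interleaving geometry reads and index writes row by row, which is what makes the page cache effective.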
My first attempt at opening it in QGIS resulted in the UI freezing, probably for hours. The reason is that QGIS always issues a spatial filter, even when the requested area of interest is at least as large as the extent of the layer, where there is no performance gain to expect from using it. So the first optimization was, in the OGR GeoPackage driver, to detect that situation and not translate the OGR spatial filter into a SQLite R-Tree filter. QGIS could then open the database and progressively display the features. Unfortunately, when zooming in, the UI froze again. When applying a spatial filter, the GeoPackage driver created a SQL request like the following one:
SELECT * FROM the_table WHERE fid IN
    (SELECT id FROM the_rtree WHERE
     xmin <= bbox_xmax AND xmax >= bbox_xmin AND
     ymin <= bbox_ymax AND ymax >= bbox_ymin)
It turns out that the sub-select (the one that fetches the feature IDs from the spatial index) is apparently run entirely before the outer select (the one that returns geometry and attributes) starts being evaluated. This way of expressing the spatial filter came from the Spatialite driver (since GeoPackage and Spatialite use the exact same mechanisms for spatial indexing), itself based on examples from an old Spatialite tutorial. For databases that are not too big, this runs well. After some experimenting, it turns out that doing a JOIN between the feature table and the R-Tree virtual table makes it possible to have a non-blocking request:
SELECT * FROM the_table t JOIN the_rtree r ON t.fid = r.id
WHERE r.xmin <= bbox_xmax AND r.xmax >= bbox_xmin AND
      r.ymin <= bbox_ymax AND r.ymax >= bbox_ymin
Now QGIS is completely responsive, although I find that even at high zoom levels the performance is somewhat disappointing, i.e. features appear rather slowly. There seems to be some threshold effect related to the size of the database, since performance is rather good in the Italian province cadastre use case.
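The bounding-box intersection test is the same in both query forms, so the JOIN variant can be checked on a toy dataset. The sketch below again uses a plain-table stand-in for the R-Tree; the feature data is made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE the_table (fid INTEGER PRIMARY KEY, name TEXT)")
# Plain-table stand-in for the R-Tree virtual table used by GeoPackage.
cur.execute("CREATE TABLE the_rtree (id INTEGER PRIMARY KEY, "
            "xmin REAL, xmax REAL, ymin REAL, ymax REAL)")
features = [(1, "a", 0.0, 1.0, 0.0, 1.0),
            (2, "b", 5.0, 6.0, 5.0, 6.0),
            (3, "c", 0.5, 2.0, 0.5, 2.0)]
for fid, name, xmin, xmax, ymin, ymax in features:
    cur.execute("INSERT INTO the_table VALUES (?, ?)", (fid, name))
    cur.execute("INSERT INTO the_rtree VALUES (?, ?, ?, ?, ?)",
                (fid, xmin, xmax, ymin, ymax))

# Bounding box of the request: it intersects features 1 and 3 only.
bbox = {"xmin": 0.0, "xmax": 1.5, "ymin": 0.0, "ymax": 1.5}
rows = cur.execute(
    "SELECT t.fid FROM the_table t JOIN the_rtree r ON t.fid = r.id "
    "WHERE r.xmin <= :xmax AND r.xmax >= :xmin "
    "AND r.ymin <= :ymax AND r.ymax >= :ymin", bbox).fetchall()
hits = sorted(fid for (fid,) in rows)
print(hits)  # [1, 3]
```

Unlike the IN-subquery form, SQLite can evaluate this JOIN incrementally, returning rows as they are found instead of materializing the full ID list first.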

Another experiment showed that increasing the SQLite page size from 1024 bytes (the default in SQLite 3.11 or earlier) to 4096 bytes (the default since SQLite 3.12) decreases the database size to 28.8 GB. This new page size of 4096 bytes is now used by default by the OGR SQLite and GPKG drivers (unless OGR_SQLITE_PRAGMA=page_size=xxxx is specified as a configuration option).

I also discovered that increasing the SQLite page cache from its 2 MB default to 2 GB (with --config OGR_SQLITE_CACHE 2000) significantly improved the time to build the spatial index, decreasing the total conversion time from 4h30 to 2h10. 2GB is just a value selected at random. It might be too large or perhaps a larger value would help further.
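Both settings mentioned above map to SQLite pragmas and can be set from application code; note that page_size only takes effect before the first table is created (or after a VACUUM). The 2 GB cache figure is the value from the experiment above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Page size must be set before the database has any content.
conn.execute("PRAGMA page_size = 4096")
# A negative cache_size is interpreted by SQLite as a size in KiB,
# so -2000000 requests roughly a 2 GB page cache.
conn.execute("PRAGMA cache_size = -2000000")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
print(page_size)  # 4096
```

This is the same effect as OGR_SQLITE_PRAGMA=page_size=4096 and --config OGR_SQLITE_CACHE 2000 on the OGR side; SQLite allocates cache pages lazily, so the large cache_size costs nothing until pages are actually read.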

All the improvements mentioned above (faster spatial index creation, better use of the spatial index and change of default page size) are now in GDAL trunk, and will be available in the upcoming 2.2.0 release.
Categories: OSGeo Planet

gvSIG Team: 3rd Catedra gvSIG Contest

OSGeo Planet - Thu, 2017-03-09 18:34

The aim of the Cátedra gvSIG is to create a meeting point for users interested in free geospatial technologies. In order to foster an environment of shared knowledge and participate in the dissemination of free geomatics, the chair organizes this international contest to encourage all users of gvSIG and other free Geographic Information Systems to share and give visibility to their work.

Students and graduates in high school, professional training and university, as well as university professors and researchers from all countries, can participate in this contest. To enter the competition you must meet the following requirements: works must be done with free Geographic Information Systems, and the subject may address any area of knowledge; works must have been made in 2016 or later; they may be presented collectively or individually; and they may be submitted in Spanish, Valencian or English.

In the event the work is based on a new development built with free and open source GIS geospatial technologies, it must be released under the GNU GPL v3 license. Among the selected works, a prize of 500 euros will be awarded in each of the following categories:

  • Work produced by students of highschool or professional training.

  • Final university project (Bachelor's, Degree or Master's).

  • Doctoral thesis or research paper.

Submissions should be sent to gvsig@umh.es and press@gvsig.com no later than November 1, 2017. Selected documents will be published in the repository of the Cátedra gvSIG UMH. The jury will evaluate the methodology, clarity and innovative nature of the work, assessing as well the relevance and applicability of the research.

Winners will be announced at the next International gvSIG Conference.

Filed under: english, events, press office Tagged: awards, contest, open source
Categories: OSGeo Planet

GeoServer Team: REST API Code Sprint Prep

OSGeo Planet - Thu, 2017-03-09 15:34

In our previous blog post we highlighted the GeoServer Code Sprint 2017 taking place at the end of this month. We are all looking forward to GeoSolutions hosting us in beautiful Tuscany, and we have lots of work to do.

One of the secrets (and this comes as no surprise) to having a successful code sprint is being prepared. With this year’s REST API migration from restlet to spring model-view-controller we want to have all technical decisions made, and examples for the developers to work from, prior to any boots hitting the ground in Italy.

But before we get into the details …

Code Sprint Sponsors

We would like to thank our sprint sponsors – we are honoured that so many organizations worldwide have stepped up to fund this activity.

Gaia3D is a professional software company in the field of geospatial information and Earth science technology. We would like to thank Gaia3D for their gold sponsorship.


Insurance Australia Group (IAG) is our second gold sponsor. This is a great example of open source being used, and supported, by an engineering team. Thanks to Hugh Saalmans and the Location Engineering team at IAG for your support.


Boundless is once again sponsoring the GeoServer team. Boundless provides a commercially supported open source GIS platform for desktop, server, mobile and cloud. Thanks to Quinn Scripter and the Boundless suite team for their gold sponsorship.



How 2 Map is pleased to support this year’s event with a bronze sponsorship.


I am overjoyed FOSSGIS (German local OSGeo chapter) is supporting us with a bronze sponsorship. This sponsorship means a lot to us as the local chapter program focuses on users and developers; taking the time to support our project directly is a kind gesture.



Sponsorship Still Needed

While we have a couple of verbal commitments to sponsor – we are still $1500 USD off the pace. If your organization has some capacity to financially support this activity we would dearly love your support.

This is an official OSGeo activity; any excess money is returned to the foundation to help the next open source sprint. OSGeo sponsorship is cumulative. Check their website for details on how your help for the GeoServer team can be further recognized.

For sponsorship details visit the wiki page (or contact Jody Garnett for assistance).

Update: Since this post was published we are happy to announce new sponsor(s).

Thanks to Caroline Chanlon and the team at Atol Conseils et Développements for bronze sponsorship.


Update: Thanks to David Ghedini (acugis.com) and others donating smaller amounts via the OSGeo paypal button.

Getting Ready for REST

In this week’s GeoServer meeting we had a chance to sit down and plan out the steps needed to get ready.

The majority of prep will go into performing the restlet to spring mvc migration for a sample REST API end point to produce a “code example” for developers to follow. We have selected the rest/styles endpoint as one of the easier examples:

  1. Preflight check: Before we start we want to have a good baseline of the current REST API responses. We would like to double check that each endpoint has a JUnit test case that checks the response against a reference file. Most of our tests just count the number of elements, or drill into the content to look for a specific value. The goal is to use these reference files as a quick “regression test” when performing the migration.
  2. Migrate rest/styles from StyleResource (restlet) to StyleController (spring): This should be a bit of fun, part of why spring model-view-controller was selected. Our goal is to have one Controller per end-point, and configure the controller using annotations directly in the Java file. This ends up being quite readable with variable names being taken directly out of the URL path. It is also easier to follow since you do not have to keep switching between XML and Java files to figure out what is going on.  It is important that the example is “picture perfect” as it will be used as a template by the developers over the course of the sprint, and will be an example of the level of quality we expect during the activity.
  3. Create StyleInfo bindings (using XStream for XML and JSON generation): The above method returns a StyleInfo data structure; our current restlet solution publishes each “resource” using the XStream library. We think we can adapt our XStream work for use in spring model-view-controller by configuring a binding for StyleInfo and implementing it using XStream. This approach is the key reason we are confident in this migration being a success; existing clients that depend on exactly the same output from GeoServer should get exactly the same output.
  4. StyleController path management: There is some work to configure each controller; while we have the option of doing some custom logic inside each controller, we would like to keep this to a minimum. This step is the small bit of applicationContext.xml configuration work we need to do for each controller; we expect it to be less work than restlet, given the use of annotations.
  5. Reference Documentation Generation: We are looking into a tool called Swagger for documentation generation. Our current reference documentation only lists each end-point (and does not provide information on the expected request and response, leaving users to read the examples or try out the API in an ad-hoc fashion). See the screen snap below; our initial experience is positive, but the amount of work required is intimidating.
  6. Updated examples for cURL and Python: We would like to rewrite our examples in a more orderly fashion to make sure both XML and JSON sample requests and responses are provided. Ideally we will inline the “reference files” from the JUnit regression test in step 1 to ensure that the documentation is both accurate and up to date.
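As a taste of what the updated Python examples might look like, here is a minimal sketch that builds an authenticated request for the rest/styles endpoint discussed above. The host, credentials and .json extension convention are illustrative assumptions, not sprint deliverables:

```python
import base64
from urllib.request import Request, urlopen  # urlopen used only against a live server


def styles_request(base_url, user, password):
    """Build a GET request for the GeoServer rest/styles endpoint (JSON)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return Request(base_url.rstrip("/") + "/rest/styles.json",
                   headers={"Authorization": "Basic " + token,
                            "Accept": "application/json"})


# Hypothetical local GeoServer with default admin credentials.
req = styles_request("http://localhost:8080/geoserver", "admin", "geoserver")
print(req.full_url)      # http://localhost:8080/geoserver/rest/styles.json
print(req.get_method())  # GET

# Against a running GeoServer you would then issue it with:
#     with urlopen(req) as resp:
#         print(resp.read().decode())
```

The equivalent cURL form would pair naturally with this in the docs, e.g. curl -u admin:geoserver -H "Accept: application/json" against the same URL.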

You can see a pretty even split in our priorities between performing the migration, and updating the documentation. We believe both of these goals need to be met for success.

Next stop Tuscany

Although this blog post focuses on the sponsorship/planning/logistics side of setting up a code sprint there is one group without whom this event could not happen – our sprint participants and in-kind sponsors (providing a venue & staff).

Thanks to GeoSolutions for hosting us, and to Astun, Boundless and GeoSolutions for the hands-on participation that makes this sprint possible.

For more information:

Categories: OSGeo Planet

gvSIG Team: Geoprocessing from scripting in gvSIG. Video tutorial available.

OSGeo Planet - Thu, 2017-03-09 10:01

Following the publication of the webinar “Aprende a programar en gvSIG en media hora” (“Learn to program in gvSIG in half an hour”), we present the perfect complement: “Geoprocesamiento desde Scripting en gvSIG” (“Geoprocessing from scripting in gvSIG”).

By geoprocessing we mean the operations for processing or manipulating spatial data carried out in a Geographic Information System. gvSIG, with more than 350 geoprocesses, has enormous potential as geoprocessing software, a potential that can be extended thanks to scripting.

In this new webinar you will learn how to access gvSIG's geoprocesses (and those of other libraries) from scripting through a library called gvPy, run geoprocesses with a single line of code, convert geoprocessing models into scripts, and more. After a brief theoretical introduction, everything is shown through practical exercises.

If we have piqued your interest, keep reading…

Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, geoprocesos, gvPy, python, scripting, webinar
Categories: OSGeo Planet

SourcePole: FOSSGIS 2017 in Passau

OSGeo Planet - Wed, 2017-03-08 08:19

In two weeks the annual German-language FOSSGIS conference on open source GIS and OpenStreetMap begins in Passau. From March 22-25, 2017, the FOSSGIS conference will take place in the three-rivers city of Passau with the support of the University of Passau. The FOSSGIS conference is the leading conference in the D-A-CH region for free and open source software for geographic information systems, as well as for OpenStreetMap and open data. Over four days, talks for beginners and experts, hands-on workshops and user meetings will provide insights into the latest developments and applications of software projects.

Photo: Tobias Hobmeier (CC-BY-SA)

Sourcepole will again be present with a booth and invites you to interesting workshops and talks:

  • Wednesday 17:00 - Workshop: Developing QGIS plugins
  • Thursday 14:30 (HS 9) - QGIS Server project status
  • Thursday 14:30 (HS 11) - From WMS to WMTS to vector tiles
  • Thursday 15:00 (HS 9) - QGIS Web Client 2

The full program is also available as a handy Android app, and online registration is open until March 19.

We look forward to an interesting conference!

Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (X): Legends

OSGeo Planet - Wed, 2017-03-08 06:20

Today we are going to learn about how to change the symbology of a layer, reviewing different types of legends that are available in gvSIG Desktop.

Symbology is one of the most important properties of a layer. gvSIG includes a great variety of options to represent layers with symbols, graphics and colours. Except for the unique symbol option, each legend type assigns a symbol to each element depending on its attribute values and the properties of the selected legend type.

By default, when a layer is added to a View it is represented with a unique symbol in a random colour; that is, all the elements of the layer are represented with the same symbol. To modify the symbology of a layer we have to open its “Properties” window and select the “Symbology” tab. We are going to open our “Game of Thrones” project and start to explore this part of gvSIG Desktop.

If we want to change a symbol, the easiest way is to double-click on it in the ToC (Table of Contents, the list of layers). A new window will open to select the new symbol. For example, we are going to double-click on the symbol of the “Rivers” layer.

In the new window we can change the colour and the width of the line, or pick from any of the installed symbol libraries (“gvSIG Basic” by default, although many more libraries can be installed from the Add-ons Manager). In this case we are going to change the width to 3 and select a dark blue colour. We press “Accept” to apply the changes.

Now we are going to review the available legend types, and we will build a legend based on the different types of locations, an attribute that we have used in previous posts. There are many possibilities for symbology; see this additional documentation.

First we have to open the “Properties” window of the layer. With the layer activated, we will find this option in the “Layer/Properties” menu, or directly with the secondary mouse button on the layer.

Now we open the “Symbology” tab, and a window is shown with the current symbology. On the left side we can find all the legend types that we can use. Warning: depending on the type of layer (point, line or polygon), different legends are available.

In this case we are going to select the “Categories/Unique values” legend. This type of legend assigns a symbol to each unique value found in the attribute table of the layer; each element is drawn depending on the value of the attribute that identifies the category. In our case we will select “Type” as the classification field; we press “Add all” and the legend created by default is shown:

The labels (on the right side) can be modified; you can change the texts here.

Now, double-clicking on each symbol opens a new window where we can modify it or select a new symbol from our symbol libraries with the “Select symbol” option. Once selected, we press “Apply” and we will see the results in our View.

The best way to learn the different legend types is to experiment… We also recommend installing and exploring the different symbol libraries available in gvSIG (hundreds of symbols of all types!)

See you in the next post…

Filed under: english, gvSIG Desktop, training Tagged: Game of Thrones, legends, symbology
Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (XV and final): Installing add-ons

OSGeo Planet - Wed, 2017-03-08 05:58

We will dedicate this last post to the “Add-ons Manager”, a tool that every gvSIG Desktop user should know.

The Add-ons Manager is a feature that allows you to customize gvSIG by installing new extensions, whether functional ones or of other kinds (such as symbol libraries). It is launched from the “Tools/Add-ons manager” menu, although it can also be reached during the installation process.

Thanks to the Add-ons Manager you can access, besides plugins not installed by default, all the new tools that get published.

In the window that appears, the first thing to select is the installation source of the add-ons:

Add-ons can come from three sources:

  • The installation binary itself. The installation file we downloaded contains a large number of add-ons or plugins, some of which are not installed by default but are available for installation. This makes it possible to customize gvSIG without an internet connection.

  • Installation from a file. We can have a file with a set of extensions ready to be installed in gvSIG.

  • From a URL. With an internet connection we can access all the add-ons available on the gvSIG server and install the ones we need. This is the recommended option if you want to browse all the available plugins.

Once the installation source has been selected, press the “Next” button to show the list of available add-ons.

The Add-ons Manager interface is divided into four parts:

  1. List of available add-ons. It shows each add-on’s name, version and type. The check boxes distinguish between add-ons already installed (green) and available ones (white). It is worth reviewing the meaning of each of the icons.

  2. Information area for the add-on selected in “1”.

  3. Area showing the “Categories” and “Types” into which the add-ons are classified. Pressing the “Categories” and “Types” buttons updates the information in this column. Selecting a category or type from the list applies a filter so that “1” shows only the add-ons related to that category or type.

  4. Quick filter. It filters the list using a text string typed by the user.

In our case we are going to install a new symbol library. To do so we click on the “Symbols” category, which narrows the list down to the plugins that are symbol libraries:

Next we tick the “G-Maps” library:

We press the “Next” button and, once the installation has finished, the “Finish” button. A message will tell us that a restart is required (this is true when installing functional plugins, but it is not necessary when installing symbol libraries).

If we now go to change the symbology of one of our layers, for example “Locations”, we will see that the new symbols are already available:

You can take a look at the available symbol libraries in the documentation.

And with this last post we close this unusual introductory GIS course. We hope you have learned and, what’s more, had as much fun as we did making it.

From here on you are ready to dig deeper into the application and discover all its power. One last piece of advice: use the user mailing lists to ask any questions or report any problems you run into:


And remember… gvSIG is coming!


Filed under: gvSIG Desktop, spanish Tagged: administrador de complementos, bibliotecas de símbolos, extensiones, Juego de tronos, plugins
Categories: OSGeo Planet

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 14): The customization story so far.

OSGeo Planet - Tue, 2017-03-07 14:40
I've been getting an increasing amount of questions lately about "How do you do X?" with mapguide-react-layout. So the purpose of this post is to lay out the customization story so far, so you have a good idea of whether the thing you want to do with this viewer is possible or not.

Before I start, it's best to divide this customization story into two main categories:

  1. Customizations that reside "inside" the viewer
  2. Customizations that reside "outside" the viewer
What is the distinction? Read on.

Customizations "inside" the viewer

I define customizations "inside" the viewer as customizations:

  • That require no modifications to the entry point HTML file that initializes and starts up the viewer. To use our other viewer offerings as an analogy, your customizations work with the AJAX/Fusion viewers as-is without embedding the viewer or modifying any of the template HTML.
  • That are represented as commands that reside in either a toolbar or menu/sub-menu and are registered/referenced in your Web Layout or Application Definition.
  • Whose main UI resides in the Task Pane or a floating or popup window and uses client-side APIs provided by the viewer for interacting with the map.

These customizations are enabled in our existing viewer offerings through:

  • InvokeURL commands/widgets
  • InvokeScript commands/widgets
  • Client-side viewer APIs that InvokeURL and InvokeScript commands can use
  • Custom widgets

From the perspective of mapguide-react-layout, here is what's supported.

InvokeURL commands

InvokeURL commands are fully supported and do what you expect from our existing viewer offerings:

  • Load a URL (that normally renders some custom UI for displaying data or interacting with the map) into the Task Pane or a floating/popup window.
  • They are selection state aware if you choose to set the flag in the command definition.
  • They will include whatever parameters you have specified in the command definition in the URL that is invoked.

If most or all of your customizations are delivered through InvokeURL commands, then mapguide-react-layout already has you covered.

InvokeScript commands

InvokeScript commands are not supported, and I have no real plans to bring such support across. I have an alternate replacement in place, which will require you to roll your own viewer.

Client-side viewer APIs

If you use AJAX viewer APIs in your Task Pane content for interacting with the map, they are supported here as well. Most of the viewer APIs are implemented, short of a few esoteric ones.

If your client-side code primarily interacts with APIs provided by Fusion, you're out of luck at the moment, as none of the Fusion client-side APIs have been ported across. I have no plans to port these APIs across 1:1, though I do intend to bring across some kind of pub/sub event system so your client-side code can respond to events like selection changes.

Custom Widgets

In Fusion, if InvokeURL/InvokeScript widgets are insufficient for your customization needs, this is where you would create a custom widget. As with the replacement for InvokeScript commands, I intend to enable a similar system through custom builds of the mapguide-react-layout viewer.

My personal barometer for how well mapguide-react-layout supports "inside" customizations is the set of MapGuide PHP Developer's Guide samples.

If you load the Web Layout for these samples in the mapguide-react-layout viewer, you will see that all of the examples (and the viewer APIs they demonstrate) work as before. If your customizations are similar in nature to what is demonstrated in the MapGuide PHP Developer's Guide samples, then things should be smooth sailing.

Customizations "outside" the viewer

I define customizations "outside" the viewer as primarily being one of two things:

  • Embedding the viewer in a frame/iframe or a DOM element that is not full width/height, and providing sufficient APIs so that code in the embedding content document can interact with the viewer or listen for certain viewer events.
  • Being able to init the viewer with all the required configuration (i.e. you do not intend to pass a Web Layout or Application Definition to init the viewer).

On this front, mapguide-react-layout doesn't offer much beyond a well-defined entry point to init and mount the viewer component.

Watch this space for how I hope to tackle this problem.

Rolling your own viewer

The majority of the work done since the last release enables the scenario of rolling your own viewer. By rolling your own viewer, you have full control over viewer customization for things the default viewer bundle does not support, such as:

  • Creating your own layout templates
  • Creating your own script commands
  • Creating your own components

If you do decide to go down this path, there are some things you should be familiar with:

  • The node.js ecosystem; in particular, how to use npm/yarn
  • webpack
  • TypeScript, along with some experience with React and Redux

Basically, if you go down this road you should have a basic idea of how frontend web development is done in 2017, because it is no longer manually editing HTML files, script tags and sprinkles of jQuery.

What I intend to do to allow for this scenario is to publish the viewer as an npm module. To roll your own viewer, you would npm/yarn install the mapguide-react-layout module, write your custom layouts/commands/components in TypeScript, and then set up a webpack configuration to pull it all together into your own custom viewer bundle.

I hope to have an example project available (probably in a different GitHub repository) when this is ready that demonstrates how to do this.

In Closing

When you ask "How can I do X?" in mapguide-react-layout, you should reframe the question in terms of whether the thing you are trying to do is "inside" or "outside" the viewer. If it is "inside" the viewer and you were able to do it in the past with the AJAX/Fusion viewers through the extension points and APIs offered, chances are very high that equivalent functionality has already been ported across.

If you are trying to do it "outside" the viewer, you'll have to wait for me to add whatever APIs and extension points are required.

Failing that, you will have the ability to consume the viewer as an npm module and roll your own viewer with your specific customizations.

Failing that?

You could always fork the GitHub repo and make whatever modifications you need. But you should not have to go that far.

Categories: OSGeo Planet

gvSIG Team: Learn to program in gvSIG in half an hour

OSGeo Planet - Tue, 2017-03-07 10:16

The gvSIG Association frequently gives scripting workshops at the various gvSIG conferences held around the world. It is interesting to attend these workshops as an observer, because you can watch students arrive with no knowledge of programming in gvSIG Desktop and leave with the foundation they need to start developing their own scripts in the application.

That is one of the main goals of scripting: to give all kinds of users, not necessarily programmers, a mechanism to develop applications or tools on top of gvSIG Desktop in a very simple way.

So simple that you can learn scripting in half an hour?

That is the challenge that our colleague at the gvSIG Association, Óscar Martínez, set himself with the webinar we held at the Universidad Miguel Hernández, whose video is now available.

Set aside half an hour of your time and keep reading...

Filed under: gvSIG Desktop, spanish Tagged: desarrollo, jython, python, quick start, scripting, tutorial
Categories: OSGeo Planet

Stefano Costa: Numbering boxes of archaeological items, barcodes and storage management

OSGeo Planet - Tue, 2017-03-07 08:23

Last week a tweet from the always brilliant Jolene Smith inspired me to write down my thoughts and ideas about numbering boxes of archaeological finds. For me, this also includes thinking about the physical labelling, and barcodes.

Question for people who organize things for their job. I'm giving a few thousand boxes unique IDs. should I go random or sequential?

— Jolene Smith (@aejolene) March 3, 2017

The question Jolene asks is: should I use sequential or random numbering? To which many answered: use sequential numbering, because it bears significance and can help detect problems like missing items, duplicates, etc. Furthermore, if the number of items you need to number is small (say, a few thousand), sequential numbering is much more readable than a random sequence.

Like many other archaeologists faced with managing boxes of items, I chose sequential numbering in the past. With 200 boxes and counting, labels were easily generated and each box had an associated web page listing its content, with a QR code providing a handy link from the physical label to the digital record. This numbering system was put in place during 3 years of fieldwork in Gortyna, and I can say that I learned a few things in the process.

The most important is that it’s very rare to start from scratch with the correct approach: boxes were labeled with a description of their content for 10 years before I adopted the numbering system pictured here. This sometimes resulted in absurdly long labels, easily at risk of being damaged and difficult to search, since no digital recording was made. I decided a numbering system was needed because, even after I had digitised all labels with their position in the storage building (which often implied the need to number shelves, corridors, etc.), it was difficult to look for specific items. The next logical step was therefore to decouple the labels from the content listing ‒ any digital tool was good here, even a spreadsheet.

Decoupling box number from description of content made it possible to manage the not-so-rare case of items moved from one box to another (after conservation, or because a single stratigraphic context was excavated in multiple steps, or because a fragile item needs more space…), and the other frequent case of data that is augmented progressively (at first you put finds from stratigraphic unit 324 in a box, then you add 4.5 kg of Byzantine amphorae, 78 sherds of cooking jars, etc.).

Since we already had a wiki as our knowledge base, it made sense to use it, creating a page for each box and linking from the page of the stratigraphic unit, or that of the single item, to the box page (this is done with Semantic MediaWiki, but it doesn’t matter). Having a URL for each box, I could put a QR code on labels: the up-to-date information about the box content was in one place (the wiki) and could be reached either via QR code or by manually looking up the box number. I don’t remember the details of my reasoning at the time, but I’m happy I didn’t choose to store the description directly inside the QR code ‒ so that scanning the barcode would immediately show a textual description instead of redirecting to the wiki ‒ because that would require changing the QR code on each update (highly impractical), and would still leave the information unsearchable.

All this is properly documented and nothing is left implicit. Sometimes you will need to use larger boxes, or smaller ones, or have some items so big that they can’t be stored inside any container: you can still treat all of these cases as conceptual boxes, and number them, label them, give them URLs.

QR codes used for boxes of archaeological items in Gortyna

There are limitations in the numbering/labelling system described above. The worst is that in the same building (sometimes on the same shelf) there are boxes from other excavation projects that don’t follow this system at all, and either have a separate numbering sequence or no numbering at all ‒ hence the “namespacing” of labels with the GQB prefix, so that a box is effectively called GQB 138 and not 138. I think an efficient numbering system should be applied at least at the scale of one storage building, but why stop there?

Turning back to the initial question: what kind of numbering should we use? When I started working at the Soprintendenza in Liguria, I was faced with the result of no less than 70 years of work, first in Ventimiglia and then in Genoa. In Ventimiglia, each excavation area got its own “namespace” (like T for the Roman theater) followed by a sequential numbering of finds (leading to items identified as T56789), but there was a single continuous sequence for the numbering of boxes in the main storage building. A second, newer building was unfortunately assigned a separate sequence starting again from 1 (with insufficient namespacing). In Genoa, I found almost no numbering at all, despite (or perhaps because of) the huge number of unrelated excavations that contributed a massive amount of boxes. Across the region, there are some 50 other buildings, large and small, with boxes that should be recorded and accounted for by the Soprintendenza (especially since most archaeological finds are State property in Italy). Some buildings have a numbering sequence; most have paper registries and nothing else.

A sequential numbering sequence seems transparent (and allows some neat tricks like the German tanks problem), since you could potentially have an ordered list and look up each number manually, which you can’t do easily with a random number. You also get the impression of being able to track gaps in a sequence (yes, I do look for gaps in numeric sequences all the time), thus spotting any missing item. Unfortunately, I have been bitten too many times by sequential numbers that turned out to have horrible bis suffixes, or that were only applied to “standard” boxes, leaving out oversized items.

On the other hand, the advantages of random numbering seem to increase linearly with the number of separate facilities ‒ I could replace random with non-transparent to better explain the concept. A good way to look at the problem is perhaps to ask whether numbering boxes is done as part of a bookkeeping activity that has its roots in paper registries, or whether it is functional to the logistics of managing cultural heritage items in a modern and efficient way.

Logistics. Do FedEx, UPS, Amazon employees care what number sequence they use to track items? Does the cashier at the supermarket care whether the EAN barcode on your shopping items is sequential? I don’t know, but I do know that they have a very efficient system in place, in which human operators are never required to actually read numerical IDs (but humans are still capable of checking whether the number on the screen is the same as the one printed on the label). There are many types of barcode used to track items, both 1D and 2D, all with their pros and cons. I also know of some successful experiments with RFID for archaeological storage boxes (in the beautiful depots at Ostia, for example), which can record numbers up to 38 digits.

Based on all the reflections of the past years, my idea for a region- or state-wide numbering+labelling system is as follows (in RFC-style wording):

1. it MUST use a barcode as the primary means of reading the numerical ID from the box label
2. the label MUST contain both the barcode and the barcode content as human-readable text
3. it SHOULD use a random numeric sequence
4. it MUST use a fixed-length string of numbers
5. it MUST avoid the use of any suffixes like a, b, bis

In practice, I would like to use UUID4 together with a barcode.

A UUID4 looks like this: 1b08bcde-830f-4afd-bdef-18ba918a1b32. It is the UUID version of a random number; it can be generated rather easily, works well with barcodes, and has a collision probability that is compatible with the scale I’m concerned with ‒ incidentally, I think it’s lower than the probability of human error in assigning a number or writing it down with a pencil or a keyboard. The label will contain the UUID string as text, plus the barcode. There will be no explicit URL in the barcode; any direct link to a data management system will be handled by the application used to read the barcode (that is, a mobile app with an embedded barcode reader). The data management system will use the UUID as part of the URL associated with each box. You can prepare labels beforehand and apply them to boxes afterwards, recording all the UUIDs as you attach the labels to the boxes. It doesn’t sound straightforward, but in practice it is.
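Preparing a batch of such labels is a few lines of Python with the standard library `uuid` module; a minimal sketch (the wiki base URL is a made-up placeholder, and actual barcode rendering would be handled by a separate library):

```python
import uuid

# Placeholder base URL for the data management system (not a real endpoint):
BASE_URL = "https://wiki.example.org/box/"

def make_labels(n):
    """Generate n label records, each with a random UUID4 and the
    URL its barcode will resolve to once scanned by the mobile app."""
    ids = [str(uuid.uuid4()) for _ in range(n)]
    return [{"uuid": u, "url": BASE_URL + u} for u in ids]

# Print a small batch ready to be sent to the label printer:
for label in make_labels(3):
    print(label["uuid"], label["url"])
```

Note that the UUID string is always 36 characters in its canonical form, which satisfies the fixed-length requirement (#4) above.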

    And since we’re deep down the rabbit hole, why stop at the boxes? Let’s recall some of the issues that I described non-linearly above:

    1. the content of boxes is not immutable: one day item X is in box Y, the next day it gets moved to box Z
    2. the location of boxes is not immutable: one day box Y is in room A of building B, the next day it gets moved to room C of building D
    3. both #1 and #2 can and will occur in bulk, not only as discrete events

    The same UUIDs can be applied in both directions in order to describe the location of each item in a large bottom-up tree structure (add as many levels as you see fit, such as shelf rows and columns):

    item X → box Y → shelf Z → room A → building B


b68e3e61-e0e7-45eb-882d-d98b4c28ff31 → 3ef5237e-f837-4266-9d85-e08d0a9f4751
3ef5237e-f837-4266-9d85-e08d0a9f4751 → 77372e8c-936f-42cf-ac95-beafb84de0a4
77372e8c-936f-42cf-ac95-beafb84de0a4 → e895f660-3ddf-49dd-90ca-e390e5e8d41c
e895f660-3ddf-49dd-90ca-e390e5e8d41c → 9507dc46-8569-43f0-b194-42601eb0b323

Now imagine adding a second item W to the same box: since the data for box Y was already complete, one just needs to fill in a single container relationship:

b67a3427-b5ef-4f79-b837-34adf389834f → 3ef5237e-f837-4266-9d85-e08d0a9f4751

and since we would have already built our hypothetical data management system, this data is filled into the system just by scanning two barcodes on a mobile device that will sync as soon as a connection is available. Moving one box to another shelf is again a single operation, despite the many items actually moved, because the leaves and branches of the data tree are naïve and only know about their parents and children, knowing nothing about grandparents and siblings.

There are a few more technical details about data structures needed for a decent proof of concept, but I have already written down too many words that are tangential to the initial question of how to number boxes.
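The parent-pointer structure sketched above, where each node knows only its container, fits in a few lines of code; moving a box is a single assignment no matter how many items it holds (all identifiers here are illustrative stand-ins for the UUIDs):

```python
# Parent-pointer tree: location[child] = parent container.
location = {
    "item-X": "box-Y",
    "item-W": "box-Y",
    "box-Y": "shelf-Z",
    "shelf-Z": "room-A",
    "room-A": "building-B",
}

def full_path(node):
    """Walk up the parent pointers to spell out where a node is stored."""
    path = [node]
    while path[-1] in location:
        path.append(location[path[-1]])
    return path

print(full_path("item-X"))
# -> ['item-X', 'box-Y', 'shelf-Z', 'room-A', 'building-B']

# Moving box Y (and implicitly everything inside it) to another
# shelf is one update, regardless of how many items it contains:
location["box-Y"] = "shelf-Q"
print(full_path("item-W"))
# -> ['item-W', 'box-Y', 'shelf-Q']
```

Bulk moves (issue #3 above) fall out of the same design: relocating a whole shelf of boxes is still one pointer update on the shelf node.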

Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (XIV): Maps

OSGeo Planet - Mon, 2017-03-06 21:28

In this penultimate post of the course for learning the basics of Geographic Information Systems through practical exercises with Game of Thrones data, we are going to work with the “Map” document.

A Map document is a set of map layout elements arranged on a virtual page, intended for graphical output (printing or export to PDF). What you see in the layout is what you get when printing or exporting the map at the defined page size. Two kinds of elements can be inserted into a Map: cartographic elements and layout elements.

In our case we are going to create a map with the route followed by the Greyjoy brothers, drawn in the post on graphical editing.

Once our project is open in gvSIG, the first thing to do is go to the “Project manager” window. A quick way to do this is through the “Show/Project manager” menu. We select the “Map” document type and press the new button. A new window opens where we define the properties of the Map page.

In our case we select an “A4” page size with “Landscape” orientation, and tell it to use the View where our layers are loaded instead of “Create new View”. If you have more than one View in your project, a list of all of them will appear.

You will see that a new map is created, with the chosen View inserted and occupying the whole page:

By clicking on the black squares at the corners and midpoints of the rectangle that defines the extent of the View, we can resize it. This is how we shape our map layout. Clicking on the inserted View element and dragging moves it. In our case we resize the inserted View and move it, and then go on to add other cartographic elements.

Most cartographic elements are closely tied to a View document, so changes made in the View can be reflected in the map (zoom changes, panning, legend modifications, layer arrangement, etc.). These tools are available from the “Map/Insert” menu and from the corresponding toolbar.

Let’s start by inserting the legend. This tool is available from the “Map/Insert/Legend” menu or with its button:

A legend is always associated with a View inserted in the Map and represents the symbology of the layers of that View. Once the tool is selected, click on the desired spot in the Map area to set the first corner of the rectangle that defines the space the legend will occupy, and drag to the opposite corner. A dialog will appear where you can define the graphical properties of the inserted legend:

In this window we can tick which layers (their symbology) we want to appear in the legend.

Next we insert a north arrow. This tool is available from the “Map/Insert/North” menu and its corresponding button:

Once the tool is selected, click on the desired spot in the Map area to set the first corner of the rectangle that defines the space the north symbol will occupy, and drag to the opposite corner. A dialog will appear where you can define the graphical properties of the inserted north arrow:

And our Map will look like this:

To finish, we insert a title with the “Insert text” tool (in the Map/Insert/Text menu or its corresponding button). It works like the other elements; in this case we type the text we want to appear: “Greyjoy Brothers”.

From here, to keep this post from getting too long, we encourage you to review the documentation on the Map document and to try inserting graphical scales, frames, etc., as well as the drawing aid tools. With practice you can produce really well-designed maps.

Once your map is finished you can export it to PDF with the button:

You can now send your PDF file to all your contacts.

As they say, practice makes perfect… so now you know.

One post remains to close the course… don’t miss it.

Filed under: gvSIG Desktop, spanish Tagged: Exportar a PDF, Juego de tronos, mapa
Categories: OSGeo Planet

Paul Ramsey: Christy Clark's $1M Faux Conference Photo-op

OSGeo Planet - Mon, 2017-03-06 20:00

On February 25, 2013, Christy Clark mounted the stage at the “International LNG Conference” in Vancouver to trumpet her government’s plans to have “at least one LNG pipeline and terminal in operation in Kitimat by 2015 and three in operation by 2020”.

Christy Clark's $1M Faux Conference Photo-op

Notwithstanding the Premier’s desire to frame economic development as a triumph of will, and notwithstanding the generous firehosing of subsidies and tax breaks on the still nascent sector, the number of LNG pipelines and terminals in operation in Kitimat remains stubbornly zero. The markets are unlikely to relent in time to make the 2020 deadline.

And about that “conference”?

Like the faux “Bollywood Awards” that the government paid $10M to stage just weeks before the 2013 election, the “LNG in BC” conference was a government-organized “event” put on primarily to advance the pre-election public relations agenda of the BC Liberal party.

In case anyone had any doubts about the purpose of the “event”, at the 2014 edition an exhibitor helpfully handed out a brochure to attendees, featuring an election night picture of the Premier and her son, under the title “We Won”.

We Won

The “LNG in BC” conference continued to be organized by the government for two more years, providing a stage each year for the Premier and multiple Ministers to broadcast their message.

The government is no longer organizing an annual LNG confab, despite their protestations that the industry remains a key priority. At this point, it would generate more public embarrassment than public plaudits.

Instead, we have a new faux “conference”, slated to run March 14-15, just four weeks before the 2017 election begins: the #BCTech Summit.

    Like “LNG in BC”, the “BCTech Summit” is a government-organized and government-funded faux industry event, put on primarily to provide an expensive backdrop for BC Liberal politicking.

    BC Innovation Council

    The BC Innovation Council (BCIC) that is co-hosting the event is itself a government-funded advisory council run by BC Liberal appointees, many of whom are also party donors. To fund the inaugural 2016 version of the event, the Ministry of Citizens Services wrote a direct award $1,000,000 contract to the BCIC.

    The pre-election timing is not coincidental, it is part of a plan that dates all the way back to early 2015, when Deputy Minister Athana Mentzelopoulos directed staff to begin planning a “Tech Summit” for spring of the following year.

    “We will not be coupling the tech summit with the LNG conference. Instead, the desire is to plan for the tech summit annually over the next couple of years – first in January 2016 and then in January 2017.” – email from A. Mentzelopoulos, April 8, 2015

    The intent of creating a “conference” to sell a new-and-improved government “jobs plan”, and the source of that plan, was made clear by the government manager tasked with delivering the event.

    “The push for this as an annual conference has come from the Premier’s Office and they want to (i) show alignment with the Jobs Plan (including the LNG conference) and (ii) show this has multi-ministry buy-in and participation.” – S. Butterworth, April 24, 2015

    The event was not something industry wanted. It was not even something the BCIC wanted. It was something the Premier’s Office wanted.

    And so they got it: everyone pulled together, the conference was put on, and it made a $1,000,000 loss which was dutifully covered by the Province via the BC Innovation Council, laying the groundwork for 2017’s much more politically potent version.

    This year’s event will be held weeks before the next election. It too will be subsidized heavily by the government. And as with the LNG conference, exhibitors and sponsors will plunk down some money to show their loyalty to the party of power.

    LNG BC Sponsors

    The platinum sponsors of LNG in BC 2015 were almost all major LNG project proponents: LNG Canada, Pacific Northwest LNG, and Kitimat LNG. Were they, like sponsors at a normal trade conference, seeking to raise their profile among attendees? Or were they demonstrating their loyalty to the government that organized the event and then approached them for sponsorship dollars?

    It is hard to avoid the conclusion that these events are just another conduit for cash in our “wild west” political culture, a culture that shows many of the signs of “systematic corruption” described by economist John Wallis in 2004.

    “In polities plagued with systematic corruption, a group of politicians deliberately create rents by limiting entry into valuable economic activities, through grants of monopoly, restrictive corporate charters, tariffs, quotas, regulations, and the like. These rents bind the interests of the recipients to the politicians who create them.”

    Systematically corrupt governments aren’t interested in personally enriching their members; they are interested in retaining and reinforcing their power, through a virtuous cycle of favours: economic favours are handed to compliant economic actors, who in turn do what they can to protect and promote their government patrons.

    Circle of Graft

    The 2017 #BCTech conference already has a title sponsor: Microsoft. In unrelated news, Microsoft is currently negotiating to bring their Office 365 product into the BC public sector. If the #BCTech conference was an ordinary trade show, these two unrelated facts wouldn’t be cause for concern. But because the event is an artificially created artifact of the Premier’s Office, a shadow is cast over the whole enterprise.

    Who is helping who here, and why?

    A recent article in Macleans included a telling quote from an anonymous BC lobbyist:

    “If your client doesn’t donate, it puts you at a competitive disadvantage,” he adds. It’s a small province, after all; the Liberals know exactly who is funding them, the lobbyist notes, magnifying the role donors play and the access they receive in return.

    As long as BC remains effectively a one-party state, the cycle of favours and reciprocation will continue. Any business subject to the regulatory or purchasing decisions of government would be foolish not to hedge its bets with a few well-placed dollars in the pocket of BC’s natural governing party.

    The cycle is systematic and self-reinforcing, and the only way to break the cycle is to break the cycle.

    Categories: OSGeo Planet

    gvSIG Team: The video of the “Geopaparazzi and gvSIG” workshop is now available

    OSGeo Planet - Mon, 2017-03-06 16:32

    During the 12th International gvSIG Conference there was a workshop about “Geopaparazzi and gvSIG” in English.

    At this workshop, attendees learned how to install Geopaparazzi, load layers, enter data, and export data to gvSIG Desktop, among other tasks.

    The workshop was recorded, so you can follow it now even if you couldn’t attend.

    Here you have the video:

    The data used at the workshop are available here.

    And the gvSIG plugins for Geopaparazzi are available here, together with installation instructions.

    Finally, you can take a look at this interesting presentation on the state of the art of Geopaparazzi and its integration in gvSIG, “State of the art of Geopaparazzi: towards gvSIG Mobile”, given at the 12th International gvSIG Conference:



    Filed under: english, events, Geopaparazzi, gvSIG Desktop, technical collaborations, training
    Categories: OSGeo Planet

    GeoSolutions: The DCAT-AP_IT profile for CKAN is finally available

    OSGeo Planet - Mon, 2017-03-06 13:39


    Dear Reader,

    This announcement is primarily of interest to our Italian readers: we have finalized the implementation of the DCAT-AP_IT metadata profile, built on the CKAN open data platform.

    We are pleased to announce the first release of the CKAN extension supporting the DCAT-AP_IT application profile in Italian open data portals. Development was funded in a joint effort by the Province of Bolzano/South Tyrol and the Province of Trento, and the extension is freely available under the AGPL v3.0 license.

    The profile for documenting public administration data (DCAT-AP_IT), published by the Agency for Digital Italy (AgID), was created to harmonize the metadata used to describe public datasets, in order to improve their quality and encourage reuse of the information. The ckanext-dcatapit extension, developed by GeoSolutions, is available in a dedicated repository under our GitHub account, with thorough documentation covering its features, requirements, and installation and configuration steps (the repository will soon move under the umbrella of AgID). Italian public administrations can therefore use the extension to make their catalogues compliant with the Italian DCAT-AP_IT profile and to foster sharing and standardization with the other public administrations across Italy.
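    For orientation, deployment follows the standard CKAN extension workflow: install the package into CKAN’s virtualenv, then enable the plugins in the site’s .ini configuration file. The fragment below is a minimal, illustrative sketch; the plugin names and locale options shown are assumptions, so the README in the ckanext-dcatapit repository is the authoritative reference for the actual values.

    ```ini
    ; production.ini -- illustrative CKAN configuration fragment
    ; Plugin names here are assumptions; check the ckanext-dcatapit README
    ; for the real list required by your version of the extension.
    ckan.plugins = ... dcat dcatapit_pkg dcatapit_org dcatapit_config

    ; Example locale settings for a multilingual catalogue
    ; (content translation itself is handled by ckanext-multilang)
    ckan.locale_default = it
    ckan.locales_offered = it en de
    ```

    After editing the configuration, the CKAN web process must be restarted for the new plugins to be picked up.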

    [caption id="attachment_3307" align="alignnone" width="515"]Dataset detail page Dataset detail page[/caption]

    Thanks to the extensive experience the GeoSolutions development team has gained building numerous CKAN extensions, as well as installing and configuring many catalogues based on this platform, the ckanext-dcatapit extension provides a solid and varied set of features, not only for the guided creation of datasets but also for the integration of metadata coming from external sources (CSW, RDF, JSON-LD) in compliance with the application profile.

    [caption id="attachment_3308" align="alignnone" width="515"]Dataset editing form Dataset editing form[/caption]

    The ckanext-dcatapit extension was developed with careful attention not only to its core features and their stability, but also to guaranteeing the highest possible compatibility with the other extensions commonly found in existing CKAN deployments. It also facilitates integration with custom extensions that need to define additional dataset fields. Multilingualism and interface localization were addressed as well, to guarantee maximum usability for the organizations that need them, such as the Provinces of Bolzano/South Tyrol and Trento: the extension ships its own localization files, which help streamline any customization in this area, while the ckanext-multilang extension provides multilingual support for the catalogue contents (datasets, organizations, groups, and more).

    [caption id="attachment_3309" align="alignnone" width="515"]Multilingual support at work Multilingual support at work[/caption]

    Below is a list of the public administrations already using ckanext-dcatapit:
    • The Open Data portal of the Province of Bolzano/South Tyrol.
    • The Open Data portal of Trentino.
    • The federated OpenDataNetwork infrastructure (still in testing for now), led by the Metropolitan City of Florence, which collects and distributes data from several Tuscan bodies, including the Metropolitan City of Florence, the Province of Prato, the Province of Pistoia, and the Arno River Basin Authority.
    We invite anyone interested in contributing to the development of this extension, or in using it, to follow our blog or subscribe to our newsletter; we also recommend looking at our GeoSolutions Enterprise Support Services professional support packages if you would like dedicated, qualified support for putting this extension into production. Likewise, we invite you to check out the information on our other open source products, such as GeoServer, MapStore, GeoNode and GeoNetwork. The GeoSolutions team,
    Categories: OSGeo Planet