OSGeo Planet

gvSIG Team: Learn to work with Digital Terrain Models and raster geoprocessing with this video tutorial

OSGeo Planet - 9 hours 1 min ago

Complementing the post in which we showed you the secrets of vector geoprocessing, today we will see how to apply some of the algorithms available in gvSIG Desktop to raster layers in general, and to Digital Terrain Models in particular.

Of the more than 350 geoprocesses available in gvSIG, a good number can be applied to raster layers, allowing us to perform calculations of all kinds that are especially useful in scientific disciplines such as hydrology.

Through a series of practical exercises that you can replicate at home, this video tutorial will show you in just a few minutes how to perform raster geoprocessing. Keep reading….


Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, hidrología, MDT, Modelo Digital del Terreno, raster
Categories: OSGeo Planet

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 15): Play with it on docker

OSGeo Planet - Wed, 2017-03-22 16:22
Today, I found a very interesting website from the tech grapevine:

http://play-with-docker.com

What is this site? It is an interactive Docker playground. If you've ever used sites like JSFiddle to try out snippets of JS/HTML/CSS, this is basically the equivalent for trying out Docker environments.

With PWD, anyone who wants to try out this viewer now has a dead simple way to spin up a demo MapGuide instance and check out the viewer.

Once you've proved to the site that you are indeed a human and not a robot, you will enter the PWD console. From here, click + ADD NEW INSTANCE to start a new shell.



Then run the following commands to build the demo docker image and spin up the container:

git clone https://github.com/jumpinjackie/mapguide-react-layout
cd mapguide-react-layout
./demo.sh

After a few minutes, you should see a port number appear beside the IP address.



This is a link to the default Apache httpd page, confirming that the demo container is serving web content to the outside world.



Now simply append /mapguide/index.php to that URL to access the demo landing page for this viewer. Pick any template on the list to load the viewer using that template.
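
For example, if PWD exposed port 8080 for your instance, the full URL would look something like this (the hostname below is made up for illustration; yours will differ):

http://ip172-18-0-5-abc123.direct.play-with-docker.com:8080/mapguide/index.php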



You now have a live demo MapGuide Server with mapguide-react-layout (and the Sheboygan dataset) preloaded for you to play with to your heart's content for the next 4 hours, after which PWD will terminate your session and all the docker images/containers/etc that you created with it.

This was just one use case that I thought up in 5 minutes after discovering this awesome site! I'm sure there are plenty of other creative uses for a site like this.

Many thanks to brucepc for his MGOS 3.1 docker image, on which the demo image is based.
Categories: OSGeo Planet

Jackie Ng: gRPC is very interesting

OSGeo Planet - Wed, 2017-03-22 16:19
MapGuide in its current form is a whole bucket of assorted libraries and technologies:
  • We use FDO for spatial data access
  • We use ACE (Adaptive Communication Environment) for:
    • Basic multi-threading primitives like mutexes, threads, etc
    • TCP/IP communication between the Web Tier and the Server Tier
    • Implementing a custom RPC layer on top of TCP/IP sockets. All of the service layer methods you use in the MapGuide API? They're all basically RPC calls sent over TCP/IP for the MapGuide Server to invoke its server-side equivalent. Most of the other classes that you pass into these service methods are essentially messages that are serialized/deserialized through the TCP/IP sockets. When you think about it, the MapGuide Web API is merely an RPC client for the MapGuide Server, which itself is an RPC server that does the actual work
  • We use Berkeley DBXML for the storage of all our XML-based resources
    • We have an object-oriented subset of these resource types (Feature Sources, Layer Definitions, Map Definitions, Symbol Definitions) in the MdfModel library with XML serialization/parsing code in the MdfParser library
    • Our Rendering and Stylization Engine work off of these MdfModel classes to render the maps that you see on your viewer
  • We use Xerces for reading/writing XML in and out of DBXML
  • We use a custom modified (and somewhat ancient) version of SWIG to generate wrappers for our RPC client so that you can talk to the MapGuide Server in:
    • .net
    • Java
    • PHP
So why do I mention all of this?

I mention this, because I've recently been checking out gRPC, a cross-platform, cross-language RPC framework from Google.

And from what I've seen so far, gRPC could easily replace and simplify most of the technology stack we're currently using for MapGuide:
  • ACE? gRPC is the RPC framework! The only reason we'd keep ACE around would be for its multi-threading facilities, but the C++ standard library at this point would be adequate to replace that as well
  • DBXML/MdfModel/xerces? gRPC is driven by Google Protocol Buffers.
    • Protobuf messages are strongly typed classes that serialize/deserialize into compact binary streams, which is more efficient and faster than slinging XML around. Ever bemoan the fact that you currently have to work with XML to manipulate maps/layers/etc? In .net you are reprieved if you use the Maestro API (where we provide strongly-typed classes for all the resource XML types), but for the other languages you have to figure out how to use the XML APIs/services provided by Java/PHP to work with the XML blobs that the MapGuide API gives and expects. With protobuf, you have none of these problems.
    • Protobuf messages can evolve in a backward-compatible manner
    • Because protobuf messages are already strongly-typed classes, it makes MdfModel/MdfParser redundant if you get the Rendering/Stylization engine to work against protobuf messages for maps/layers/symbols/styles/etc
    • If we ever wanted to add support for Mapbox Vector Tiles (which seems to be the de-facto vector tile format), well the spec is protobuf-based so ...
    • Protobuf would mean we no longer deal in XML, so we don't need Xerces for reading/writing XML, and DBXML as the storage database (with all its cryptic error messages that can bubble up from the Resource Service APIs) can be replaced with something simpler. We may not even need a database at this point; dumping protobuf messages to a structured file system could probably be a simpler solution
  • SWIG? gRPC and protobuf can already generate service stubs and protobuf message classes in the languages we currently target:
    • .net
    • Java
    • PHP
    • And if we wanted, we can also instantly generate a gRPC-based MapGuide API for:
      • node.js
      • Ruby
      • Python
      • C++
      • Android Java
      • Objective-C
      • Go
    • The best thing about this? All of this generated code is portable on their respective platforms and doesn't involve native code interop through "flattened" interfaces of C code wrapping the original C++ code, which is what SWIG ultimately does for any language we want to generate wrapper bindings for. If it does involve native code interop, that concern is taken care of by the respective gRPC/protobuf implementation for that language.
  • Combine a gRPC-based MapGuide Server with grpc-gateway and we'd have an instant REST API to easily build a client-side map viewer out of
  • gRPC works at a scale that is way beyond what we can achieve with MapGuide currently. After all, this is what Google uses themselves for building their various services
If what I said above doesn't make much sense, consider a practical example.

Say we had our Feature Service (which, as a user of the MapGuide API, you should be familiar with) as a gRPC service definition:

// Message definitions for the request/response types below are omitted for brevity, but basically every request and
// response type mentioned below will have equivalent protobuf message classes automatically generated along with
// the service

// Provides an abstraction layer for the storage and retrieval of feature data in a technology-independent way.
// The API lets you determine what storage technologies are available and what capabilities they have. Access
// to the storage technology is modeled as a connection. For example, you can connect to a file and do simple
// insertions or connect to a relational database and do transaction-based operations.
service FeatureService {
  // Creates or updates a feature schema within the specified feature source.
  // For this method to actually delete any schema elements, the matching elements
  // in the input schema must be marked for deletion
  rpc ApplySchema (ApplySchemaRequest) returns (BasicResponse);
  rpc BeginTransaction (BeginTransactionRequest) returns (BeginTransactionResponse);
  // Creates a feature source in the repository identified by the specified resource
  // identifier, using the given feature source parameters.
  rpc CreateFeatureSource (CreateFeatureSourceRequest) returns (BasicResponse);
  rpc DeleteFeatures (DeleteFeaturesRequest) returns (DeleteFeaturesResponse);
  // Gets the definitions of one or more schemas contained in the feature source for particular classes.
  // If the specified schema name or a class name does not exist, this method will throw an exception.
  rpc DescribeSchema (DescribeSchemaRequest) returns (DescribeSchemaResponse);
  // This method enumerates all the providers and if they are FDO enabled for the specified provider and partial connection string.
  rpc EnumerateDataStores (EnumerateDataStoresRequest) returns (EnumerateDataStoresResponse);
  // Executes SQL statements NOT including SELECT statements.
  rpc ExecuteSqlNonQuery (ExecuteSqlNonQueryRequest) returns (ExecuteSqlNonQueryResponse);
  // Executes the SQL SELECT statement on the specified feature source.
  rpc ExecuteSqlQuery (ExecuteSqlQueryRequest) returns (stream DataRecord);
  // Gets the capabilities of an FDO Provider
  rpc GetCapabilities (GetCapabilitiesRequest) returns (GetCapabilitiesResponse);
  // Gets the class definition for the specified class
  rpc GetClassDefinition (GetClassDefinitionRequest) returns (GetClassDefinitionResponse);
  // Gets a list of the names of all classes available within a specified schema
  rpc GetClasses (GetClassesRequest) returns (GetClassesResponse);
  // Gets a set of connection values that are used to make connections to an FDO provider that permits multiple connections.
  rpc GetConnectionPropertyValues (GetConnectionPropertyValuesRequest) returns (GetConnectionPropertyValuesResponse);
  // Gets a list of the available FDO providers together with other information such as the names of the connection properties for each provider
  rpc GetFeatureProviders (GetFeatureProvidersRequest) returns (GetFeatureProvidersResponse);
  // Gets the locked features.
  rpc GetLockedFeatures (GetLockedFeaturesRequest) returns (stream FeatureRecord);
  // Gets all available long transactions for the provider
  rpc GetLongTransactions (GetLongTransactionsRequest) returns (GetLongTransactionsResponse);
  // This method returns all of the logical to physical schema mappings for the specified provider and partial connection string
  rpc GetSchemaMapping (GetSchemaMappingRequest) returns (GetSchemaMappingResponse);
  // Gets a list of the names of all of the schemas available in the feature source
  rpc GetSchemas (GetSchemasRequest) returns (GetSchemasResponse);
  // Gets all of the spatial contexts available in the feature source
  rpc GetSpatialContexts (GetSpatialContextsRequest) returns (GetSpatialContextsResponse);
  // Inserts a new feature into the specified feature class of the specified Feature Source
  rpc InsertFeatures (InsertFeaturesRequest) returns (stream FeatureRecord);
  // Selects groups of features from a feature source and applies filters to each of the groups according to the criteria set in the aggregate query option supplied
  rpc SelectAggregate (SelectAggregateRequest) returns (stream DataRecord);
  // Selects features from a feature source according to the criteria set in the query options provided
  rpc SelectFeatures (SelectFeaturesRequest) returns (stream FeatureRecord);
  // Set the active long transaction name for a feature source
  rpc SetLongTransaction (SetLongTransactionRequest) returns (BasicResponse);
  // Connects to the Feature Provider specified in the connection string
  rpc TestConnection (TestConnectionRequest) returns (TestConnectionResponse);
  // Executes commands contained in the given command set
  rpc UpdateFeatures (UpdateFeaturesRequest) returns (UpdateFeaturesResponse);
  // Updates all features that match the given filter with the specified property values
  rpc UpdateMatchingFeatures (UpdateMatchingFeaturesRequest) returns (UpdateMatchingFeaturesResponse);
}
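
The message definitions are omitted above, but to give a rough idea of what they could look like, here is a hypothetical request/response pair for DescribeSchema (field names, numbers and the FeatureSchema type are invented for illustration, not taken from any actual MapGuide .proto file):

// Hypothetical messages, for illustration only
message DescribeSchemaRequest {
  string feature_source = 1;       // resource id of the feature source to describe
  string schema_name = 2;          // optional: restrict the result to one schema
  repeated string class_names = 3; // optional: restrict the result to specific classes
}

message DescribeSchemaResponse {
  repeated FeatureSchema schemas = 1; // strongly-typed schema definitions
}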

Running this service definition through the protoc compiler with the gRPC plugin (see the sample invocation after this list) gives us:
  • Auto-generated (and strongly-typed) protobuf classes for all the messages. ie: The request and response types for this service
  • An auto-generated FeatureService gRPC client ready to use in the language of our choice
  • An auto-generated gRPC server stub for FeatureService in the language of our choice ready for us to "fill in the blanks". For practical purposes, we'd generate this part in C++ and fill in the blanks by mapping the various service operations to their respective FDO APIs and its return values to our gRPC responses.
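
For example, assuming the service definition lives in a file called feature_service.proto (a hypothetical name), the C++ messages, client and server stub could be generated with the stock protoc invocation from the gRPC documentation:

protoc --cpp_out=. feature_service.proto
protoc --grpc_out=. --plugin=protoc-gen-grpc=`which grpc_cpp_plugin` feature_service.proto
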
And at this point, we'd just need a simple C++ console program that bootstraps gRPC/FDO, registers our gRPC service implementation and starts the gRPC server on a particular port, and we'd have a functional Feature Service implementation in gRPC. Our auto-generated Feature Service client can connect to this host and port to immediately start talking to it.

The only real work is the "filling in the blanks" on the server part. Everything else is taken care of for us.

Extrapolate this to the rest of our services (Resource, Rendering, etc) and we basically have a gRPC-based MapGuide Server.

Filling in the blanks is a conceptually simple exercise as well:
  • Feature Service - Pass down to the APIs in FDO.
  • Rendering Service - Set up FDO queries based on the map/layers visible and pass query results to the Rendering/Stylization engine.
  • Resource Service - Read/write protobuf resources to some kind of persistent storage. It doesn't have to be something complex like DBXML, it can be as simple as a file system (that's what mg-desktop does for its resource service implementation btw)
  • Tile Service - It's just like the rendering service, but you're asking the Rendering/Stylization engine to render tile-sized content.
  • KML Service - Just like rendering service, but you're asking the Rendering/Stylization engine to render KML documents instead of images.
  • Drawing Service - Do we still care about DWF support? Well if we have to support this, it's just passing down to the APIs in DWF Toolkit.
  • Mapping Service - It's a mish-mash of tapping into the Rendering/Stylization engine and/or the DWF Toolkit.
  • Profiling Service - Just tap into whatever tracing/instrumentation APIs provided by gRPC.
Now because gRPC is cross-language, nothing says we have to use C++ for all the service implementations; it's just that most of the libraries and APIs we'd be mapping into are already in C++, so in practical terms we'd stick to the same language as well.

Front this with grpc-gateway, and we basically have our RESTful mapagent to build a map viewer against.

There's still a few unknowns:
  • How do we model file uploads/downloads?
  • Can server-side service implementations call other services?
Google's vast set of gRPC definitions for their various web services can answer the first part. The other part will need more playing around.

The thought of a gRPC-based MapGuide Server is very exciting!
Categories: OSGeo Planet

gvSIG Team: gvSIG nominated in 3 categories of the “Open Awards”. Public voting now open.

OSGeo Planet - Tue, 2017-03-21 15:45

The gvSIG project has once again seen its commitment to developing free geomatics solutions recognized, with nominations in 3 categories of the awards known as the “Open Awards”.

As stated on the awards website, their goal is to publicly recognize companies, administrations, public figures and communities that create, support and promote great solutions with Open Source technologies and Free Software.

The Open Awards recognize and reward the open source projects and initiatives that have stood out the most during the last year, boost the communication and public visibility of the participating companies, projects and administrations, and value the work done by all of them.

The 3 categories in which gvSIG participates reflect, to some extent, the broad scope of the project. It is nominated for “Best service/solution provider”, where the gvSIG Association is demonstrating that new economic models of software production can be launched from collaborative perspectives; for “Most innovative platform/project”, a category focused on the technical-scientific side of the project, where the gvSIG Suite stands as the comprehensive platform to meet the “geo” needs of any organization, with desktop, mobile and web applications following the philosophy of Spatial Data Infrastructures; and finally in the category we are most excited to take part in, which is really a recognition of all of you who support and help gvSIG day after day from wherever you are: the nomination for “Best technology community”.

These awards have a first phase of public voting, from which a “top five” will emerge. From there, a jury will determine the winners of each category.

What steps do you need to follow to vote?

  1. Vote in the categories you consider (from today until April 30).
  2. Confirm your vote by email.

As an additional incentive, voting gives you exclusive access to the eBook “OpenExpo Tendencias Open Source y Software Libre 2017”.

The direct links to the categories where gvSIG is nominated are:

We thank you all in advance for your support, and rest assured that we will also vote for you as “Best technology community”.

Categories: OSGeo Planet

GeoSolutions: GeoServer Code Sprint needs you

OSGeo Planet - Tue, 2017-03-21 12:22

GeoServer

Dear Reader,

everything is ready at GeoSolutions for next week's GeoServer code sprint, which will take place in our offices during the week of March 27th.

Sprint 2017

The main focus will be on refactoring GeoServer's REST API towards a more modern approach (see this page for some insights). A number of GeoServer developers from various organizations will gather from all over the world for this work, and your support in funding this initiative would help us with the expenses; we are therefore asking all our readers for help. Sponsorship opportunities for contributing are listed on the OSGeo wiki.

Happy sprinting to everybody! The GeoSolutions team,
Categories: OSGeo Planet

gvSIG Team: Learn the secrets of vector geoprocessing in gvSIG with this video tutorial

OSGeo Planet - Tue, 2017-03-21 09:01

In gvSIG Desktop we have more than 350 geoprocesses available, and that is without counting plugins such as the recently announced Jgrass one. A good share of these geoprocesses apply to vector layers, from the most common ones -such as buffer, clip, merge,…- to more specific and less known ones.
Today we present a video tutorial in which, in just a few minutes, you will learn how the gvSIG geoprocesses work, through a series of practical exercises that show how easily the algorithms available in the application can be used.
In the final part of the video tutorial you will learn how to use the geoprocess modeler, a very useful tool that is not widely known among gvSIG users.
Keep reading…


Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, geoprocesos, modelador
Categories: OSGeo Planet

GeoSolutions: New release of MapStore 2 with theming support

OSGeo Planet - Mon, 2017-03-20 11:03

blog

Dear Reader,

we are pleased to announce a new release of MapStore 2, our flagship open source WebGIS product, which we have called 2017.02.00. The full list of changes for this release can be found here, but let us concentrate on the most interesting recent additions.

Advanced Theming

The main feature of this release is the possibility to have different themes, as shown in the gallery below.

[Gallery: theme screenshots]

MapStore 2 was conceived with the goal of being highly customizable, therefore we have worked hard on the look and feel from the beginning to create a product that easily adapts to predefined graphical guidelines, as well as a framework that is easy to integrate with 3rd party applications.

With this release that goal has been achieved. We have refactored the original theme, greatly simplifying the steps needed to create new themes and to switch between them. You can try switching live from the home page; there is a specific combo box with some predefined styles.

On the technical side, we have refactored MapStore 2 theme support using less, so creating your own theme to match your company's visual design guidelines is now very simple. We are developing an example that allows you to customize your theme directly from the web page; you can see it below or test it here.
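
To give a rough idea, a less-based override can be as small as redefining a handful of variables (the names below are hypothetical, for illustration only; check the MapStore 2 docs for the real variable list):

// hypothetical theme override, illustration only
@ms2-primary-color: #2c3e50;
@ms2-background-color: #ecf0f1;
@ms2-font-family: "Helvetica Neue", Arial, sans-serif;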

[Gallery: theme customization example]

Balloon Tutorial

The balloon tutorial is now ready, with HTML support as well. You can try it live by clicking on "Tutorial" in the map's burger menu, for instance in this map.

[Gallery: balloon tutorial screenshots]

Notes for Developers

This release has a number of changes that are crucial for developers to know about, since they break compatibility with older versions. Here you can find the details of what we updated and how to migrate your application.

We strongly believe that these changes will speed up development and improve the quality and readability of the code (particularly redux-observable). If you find yourself struggling with these changes, reach out to us on the developer mailing list and we will help you out.

We also looked around for a tool to produce developers' docs that would satisfy our needs in the longer term. We found docma to be a great tool, as it allows us to provide generic guides as well as to document our components, plugins and JavaScript API inline using jsDoc + Markdown. You can find the current version of the developer documentation here.

Twitter Account

MapStore 2 now has its own twitter account, which it is using to let us know how it feels, as well as to share useful information and insights.

What we are working on for the next release

The main focus for the next release is the implementation of a JavaScript API that will allow you to include MapStore 2 in your application or web site and interact with it in more advanced ways than a simple IFRAME. We are also going to focus on the following items:

  • Improve developer's documentation
  • Improve the management of Maps, in order to allow users to manage them also from the map itself
  • Better interaction with WFS

In the longer term, we have a number of features and functionalities in our plans like editing, advanced templating, styling, OAUTH 2.0, and more…

So, stay tuned and happy webmapping!

If you are interested in learning about how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,
Categories: OSGeo Planet

gvSIG Team: Video tutorial available for learning geostatistics with gvSIG

OSGeo Planet - Wed, 2017-03-15 09:06

The statistical package R is one of the most flexible, powerful and professional tools currently available for statistical tasks of all kinds, from the most basic to the most advanced. And, most importantly, it is free software.

Since its latest versions, gvSIG Desktop has included plugins to integrate R, opening up the possibility of performing all kinds of geostatistical analysis.

Geologists, biologists, ecologists, agronomists, engineers, meteorologists, sociologists…to name just a few professionals, need to statistically analyze georeferenced information.

From the gvSIG Association we present a video tutorial that will introduce you to how the gvSIG-R duo works.

If we have piqued your interest, keep reading…


Filed under: gvSIG Desktop, spanish Tagged: Geoestadística, r
Categories: OSGeo Planet

GeoServer Team: GeoServer 2.11-RC1 Released

OSGeo Planet - Tue, 2017-03-14 07:23

We are happy to announce the release of GeoServer 2.11-RC1. Downloads are available (zip, war, dmg and exe) along with docs and extensions.

This is a release candidate of GeoServer, not intended for production use. This release is made in conjunction with GeoTools 16-RC1 and GeoWebCache 1.11-RC1.

Thanks to everyone who provided feedback, bug reports and fixes. Here are some of the changes included in 2.11-RC1:

  • Incompatibilities with GeoFence and Control-flow have been resolved
  • Empty WFS Transactions (which perform no changes) no longer indicate that everything has changed
  • Improvements to WFS GetFeature support for 3D BBOX requests
  • We have one known regression with the windows service installer (memory setting is incorrect)
  • For additional details please see the release notes (2.11-RC1 | 2.11-beta)
Release Candidate Testing

The 2.11 release is expected in March; this release candidate is a “dry run” where we confirm new functionality is working as expected and double-check the packaging and release process.

Please note that GeoServer 2.9 has reached its end-of-life. If your organization has not yet upgraded, please give us a hand by evaluating 2.11-RC1 and providing feedback and your experiences to the development team. This is a win/win situation where your participation can both assist the GeoServer team and reduce your risk when upgrading.

Corrected default AnchorPoint for LabelPlacement

An issue with SLD 1.0 rendering has been fixed – when a LabelPlacement did not include an AnchorPoint we were using the wrong default!

  • BEFORE: the default anchor point was X=0.5 and Y=0.5 – which is at the middle height and middle length of the label text.
  • AFTER: the default anchor point is X=0.0 and Y=0.5 – which is at the middle height of the lefthand side of the label text.

This is a long-standing issue that was only just noticed in February. If you need to “restore” the incorrect behaviour, please start up with the -Dorg.geotools.renderer.style.legacyAnchorPoint=true system property.
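
Alternatively, styles that spell out the anchor point explicitly are unaffected by either default. A minimal SLD 1.0 fragment reproducing the corrected default looks like this:

<LabelPlacement>
  <PointPlacement>
    <AnchorPoint>
      <AnchorPointX>0.0</AnchorPointX>
      <AnchorPointY>0.5</AnchorPointY>
    </AnchorPoint>
  </PointPlacement>
</LabelPlacement>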

Startup Performance

With extensive improvements to startup performance and to OGC requests for large installations, we are looking forward to feedback from your testing.

About GeoServer 2.11

GeoServer 2.11 is scheduled for release in March 2017. This puts GeoServer back on our six-month “time boxed” release schedule.

  • OAuth2 for GeoServer (GeoSolutions)
  • YSLD has graduated and is now available for download as a supported extension
  • Vector tiles has graduated and is now available for download as an extension
  • The rendering engine continues to improve, with underlined labels now available as a vendor option
  • A new “opaque container” layer group mode can be used to publish a basemap while completely restricting access to the individual layers.
  • Layer group security restrictions are now available
  • Latest in performance optimizations in GeoServer (GeoSolutions)
  • Improved lookup of EPSG codes allows GeoServer to automatically match EPSG codes making shapefiles easier to import into a database (or publish individually).
Categories: OSGeo Planet

Even Rouault: Dealing with huge vector GeoPackage databases in GDAL/OGR

OSGeo Planet - Sat, 2017-03-11 22:25
Recently, I've fixed a bug in the OGR OpenFileGDB driver, the driver made by reverse engineering the ESRI FileGeoDatabase format, so as to be able to read tables whose section enumerating and describing fields is located beyond the first 4 GB of the file. The table that triggered this, from the 2016 TIGER database, features all linear edges of the USA and is 15 GB large (feature and spatial indexes included), with 85 million features.

Some time before, I had to deal with a smaller database - 1.7 GB as GeoPackage - with 5.4 million polygons (bounding box) from the cadastre of an Italian province. One issue I noticed is that getting the summary of a layer, with ogrinfo -al -so the.gpkg, was very slow. The reason is that this summary includes the feature count, and there's no way to get this metadata quickly, apart from running the "SELECT COUNT(*) FROM the_table" request, which causes a full scan of the table. For small databases this runs fast, but when going into the gigabyte realm it can take several dozens of seconds. Getting the spatial extent of the layer, on the other hand - another piece of information displayed by the summary mode of ogrinfo - is fast, since the gpkg_contents "system" table of a GeoPackage database includes the bounding box of the table.

So my idea was to extend the definition of the gpkg_contents table with a new column, ogr_feature_count, to store the feature count. I went to implement that, and it worked fine. The synchronization of the value of ogr_feature_count after edits can be done with 2 SQLite triggers, on row insertion and deletion, and that works with implementations that are not aware of the existence of this new column, like older OGR versions. Unfortunately, it appears that at least one other implementation completely rejected such databases. There is some inconsistency in the GeoPackage specification about whether additional columns are accepted in system tables. From the /base/core/contents/data/table_def test case - "Column order, check constraint and trigger definitions, and other column definitions in the returned sql are irrelevant." - it would seem that additional columns should still be considered a valid GeoPackage. Anyway, that's only the theory, and we don't want to break interoperability for just a nice-to-have feature...

So I changed the design a bit and created a new table, gpkg_ogr_contents, with table_name and feature_count columns. I'm aware that I should not borrow the gpkg_ prefix, but I felt it was safer to do so, since other implementations will probably ignore any unknown gpkg_-prefixed table, and the addition of the ogr_ prefix makes collisions with future extensions of the GeoPackage specification unlikely. The content of this table is also kept in synchronization with the data table thanks to two triggers (sketched below), and this makes the other software that rejected my first attempt happy. Problem solved.
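
A minimal sketch of what this pair of triggers can look like, assuming a feature table called the_table (the exact trigger names and SQL that OGR emits may differ):

CREATE TRIGGER trigger_insert_feature_count_the_table
AFTER INSERT ON the_table
BEGIN
  UPDATE gpkg_ogr_contents SET feature_count = feature_count + 1
  WHERE table_name = 'the_table';
END;

CREATE TRIGGER trigger_delete_feature_count_the_table
AFTER DELETE ON the_table
BEGIN
  UPDATE gpkg_ogr_contents SET feature_count = feature_count - 1
  WHERE table_name = 'the_table';
END;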

Let's come back to our FileGeoDatabase. My first attempt to convert it to GeoPackage with ogr2ogr resulted in converting the features in about half an hour, but once this 100% stage was reached, the finalization, which includes building the spatial index, took ages. So long that after a whole night it wasn't yet finished, and it was seriously making the computer non-responsive due to massive I/O activity.

In the GeoPackage driver, the spatial index is created after feature insertion, so that the feature table and spatial index tables are well separated in the file; from previous experiments with the Spatialite driver, this proved to be the right strategy. Populating the SQLite R-Tree is done with a simple statement: INSERT INTO my_rtree SELECT fid, ST_MinX(geom), ST_MaxX(geom), ST_MinY(geom), ST_MaxY(geom) FROM the_table. Analyzing what happens in the SQLite code is not easy when you are not familiar with that code base, but my intuition is that there was constant back and forth between the geometry data area and the R-Tree area in the file, making the SQLite page cache inefficient.

So I decided to experiment with a more progressive approach (sketched below): iterate over the feature table and collect the fid, minx, maxx, miny, maxy values by chunks of 100 000 rows, insert those 100 000 bounding boxes into the R-Tree, and loop again until the feature table has been completely read. With such a strategy, the spatial index can now be built in 4h30. The resulting GeoPackage file weighs 31.6 GB, twice as large as the FileGeoDatabase. One of the reasons for the difference must be that geometries in FileGeoDatabase are compressed (quantization for coordinate precision, delta encoding and use of variable-length integers) whereas GeoPackage uses an uncompressed SQLite BLOB based on OGC WKB.
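
Expressed as SQL, each iteration of that progressive approach boils down to something like the following statement (a sketch: the driver iterates with its own reading logic rather than a literal fid cursor, and :last_fid stands for the highest fid seen in the previous chunk):

INSERT INTO my_rtree
  SELECT fid, ST_MinX(geom), ST_MaxX(geom), ST_MinY(geom), ST_MaxY(geom)
  FROM the_table
  WHERE fid > :last_fid
  ORDER BY fid
  LIMIT 100000;
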
My first attempt at opening it in QGIS resulted in the UI freezing, probably for hours. The reason is that QGIS always issues a spatial filter, even when requesting an area of interest that is at least as large as the extent of the layer, where there is no performance gain to expect from using it. So the first optimization was, in the OGR GeoPackage driver, to detect that situation and not translate the OGR spatial filter into a SQLite R-Tree filter. QGIS could now open the database and progressively display the features. Unfortunately, when zooming in, the UI became frozen again. When applying a spatial filter, the GeoPackage driver created a SQL request like the following one:
SELECT * FROM the_table WHERE fid IN
    (SELECT id FROM the_rtree WHERE
     xmin <= bbox_xmax AND xmax >= bbox_xmin AND
     ymin <= bbox_ymax AND ymax >= bbox_ymin)
It turns out that the sub-select (the one that fetches the feature IDs from the spatial index) is apparently entirely run before the outer select (the one that returns geometry and attributes) starts being evaluated. This way of expressing the spatial filter came from the Spatialite driver (since GeoPackage and Spatialite use exactly the same mechanisms for spatial indexing), itself based on examples from an old Spatialite tutorial. For not-too-big databases, this runs well. After some experimenting, it turns out that doing a JOIN between the feature table and the R-Tree virtual table makes it possible to have a non-blocking request:
SELECT * FROM the_table t JOIN the_rtree r ON t.fid = r.id
WHERE r.xmin <= bbox_xmax AND r.xmax >= bbox_xmin AND
      r.ymin <= bbox_ymax AND r.ymax >= bbox_ymin
Now QGIS is completely responsive, although I find that even at high zoom levels the performance is somewhat disappointing, i.e. features appear rather slowly. There seems to be some threshold effect related to the size of the database, since performance is rather good in the Italian province cadastre use case.

Another experiment showed that increasing the SQLite page size from 1024 bytes (the default in SQLite 3.11 or earlier) to 4096 bytes (the default since SQLite 3.12) decreases the database size to 28.8 GB. This new page size of 4096 bytes is now used by default by the OGR SQLite and GPKG drivers (unless OGR_SQLITE_PRAGMA=page_size=xxxx is specified as a configuration option).

I also discovered that increasing the SQLite page cache from its 2 MB default to 2 GB (with --config OGR_SQLITE_CACHE 2000) significantly improved the time to build the spatial index, decreasing the total conversion time from 4h30 to 2h10. 2GB is just a value selected at random. It might be too large or perhaps a larger value would help further.
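
Putting the two tweaks together, a conversion could be launched along these lines (file names are hypothetical; the page_size pragma is only needed on GDAL builds where 1024 bytes is still the default):

ogr2ogr -f GPKG edges.gpkg edges.gdb --config OGR_SQLITE_CACHE 2000 --config OGR_SQLITE_PRAGMA page_size=4096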

All improvements mentioned above (faster spatial index creation, better use of the spatial index, and the change of default page size) are now in GDAL trunk, and will be available in the upcoming 2.2.0 release.
Categories: OSGeo Planet

gvSIG Team: 3rd Catedra gvSIG Contest

OSGeo Planet - Thu, 2017-03-09 18:34

The aim of the Cátedra gvSIG is to create a meeting point for users interested in free geospatial technologies. In order to foster an environment of shared knowledge and participate in the dissemination of free geomatics, the chair organizes this international contest to encourage all users of gvSIG and of free Geographic Information Systems in general to share and give visibility to their work.

High school, vocational training and university students and graduates, as well as university professors and researchers from all countries, can participate in this contest. To enter the competition you must meet the following requirements: works must be done with free Geographic Information Systems; the subject may address any area of knowledge; works must have been made in 2016 or later; papers may be presented collectively or individually; and works may be submitted in Spanish, Valencian or English.

In the event the work is based on a new development built with free and open source GIS geospatial technologies, it must be released under the GNU/GPL v3 license. Among the selected works, a prize of 500 euros will be awarded in each of the following categories:

  • Work produced by students of high school or vocational training.

  • Final university project (Bachelor's, Degree or Master's).

  • Doctoral thesis or research paper.

Submissions should be sent to gvsig@umh.es and press@gvsig.com no later than November 1, 2017. Selected documents will be published in the repository of the Cátedra gvSIG UMH. The jury will evaluate the methodology, clarity and innovative nature of the work, also assessing the relevance and applicability of the research.

Winners will be announced in the next International gvSIG Conference.


Filed under: english, events, press office Tagged: awards, contest, open source
Categories: OSGeo Planet

GeoServer Team: REST API Code Sprint Prep

OSGeo Planet - Thu, 2017-03-09 15:34

In our previous blog post we highlighted the GeoServer Code Sprint 2017 taking place at the end of this month. We are all looking forward to GeoSolutions hosting us in beautiful Tuscany, and we have lots of work to do.

One of the secrets (and this comes as no surprise) to a successful code sprint is being prepared. With this year’s REST API migration from restlet to spring model-view-controller, we want to have all technical decisions made, and examples for the developers to work from, prior to any boots hitting the ground in Italy.

But before we get into the details …

Code Sprint Sponsors

We would like to thank our sprint sponsors – we are honoured that so many organizations have stepped up worldwide to fund this activity.

Gaia3D is a professional software company in the field of geospatial information and Earth science technology. We would like to thank Gaia3D for their gold sponsorship.

Gaia3d

Insurance Australia Group (IAG) is our second gold sponsor. This is a great example of open source being used, and supported, by an engineering team. Thanks to Hugh Saalmans and the Location Engineering team at IAG for your support.

iag_logo

Boundless is once again sponsoring the GeoServer team. Boundless provides a commercially supported open source GIS platform for desktop, server, mobile and cloud. Thanks to Quinn Scripter and the Boundless suite team for their gold sponsorship.

 

Boundless_Logo

How 2 Map is pleased to support this year’s event with a bronze sponsorship.

How2map_logo

I am overjoyed that FOSSGIS (the German local OSGeo chapter) is supporting us with a bronze sponsorship. This sponsorship means a lot to us, as the local chapter program focuses on users and developers; taking the time to support our project directly is a kind gesture.

fossgis_logo

 

Sponsorship Still Needed

While we have a couple of verbal commitments to sponsor, we are still $1500 USD off the pace. If your organization has some capacity to financially support this activity, we would dearly love your support.

This is an official OSGeo activity; any excess money is returned to the foundation to help the next open source sprint. OSGeo sponsorship is cumulative; check their website for details on how helping out the GeoServer team can be further recognized.

For sponsorship details visit the wiki page (or contact Jody Garnett for assistance).

Update: Since this post was published we are happy to announce new sponsor(s).

Thanks to Caroline Chanlon and the team at Atol Conseils et Développements for bronze sponsorship.

atol_logo

Update: Thanks to David Ghedini (acugis.com) and others donating smaller amounts via the OSGeo paypal button.

Getting Ready for REST

In this week’s GeoServer meeting we had a chance to sit down and plan out the steps needed to get ready.

The majority of the prep will go into performing the restlet to spring mvc migration for a sample REST API end point, to produce a “code example” for developers to follow. We have selected the rest/styles endpoint as one of the easier examples:

  1. Preflight check: Before we start we want to have a good baseline of the current REST API responses. We would like to double check that each endpoint has a JUnit test case that checks the response against a reference file. Most of our tests just count the number of elements, or drill into the content to look for a specific value. The goal is to use these reference files as a quick “regression test” when performing the migration.
  2. Migrate rest/styles from StyleResource (restlet) to StyleController (spring): This should be a bit of fun, part of why spring model-view-controller was selected. Our goal is to have one Controller per end-point, and to configure the controller using annotations directly in the Java file. This ends up being quite readable, with variable names taken directly out of the URL path. It is also easier to follow, since you do not have to keep switching between XML and Java files to figure out what is going on. It is important that the example is “picture perfect” as it will be used as a template by the developers over the course of the sprint, and will set the level of quality we expect during the activity. (A rough sketch of what such a controller could look like follows this list.)
    [Screenshot: code example]
  3. Create StyleInfo bindings (using XStream for xml and json generation): The above method returns a StyleInfo data structure; our current restlet solution publishes each “resource” using the XStream library. We think we can adapt our XStream work for use in spring model-view-controller by configuring a binding for StyleInfo and implementing it using XStream. This approach is the key reason we are confident in this migration being a success: existing clients that depend on exactly the same output from GeoServer should get exactly the same output.
  4. StyleController path management: There is some work to configure each controller; while we have the option of doing some custom logic inside each controller, we would like to keep this to a minimum. This step is the small bit of applicationContext.xml configuration work we need to do for each controller; we expect it to be less work than restlet, given the use of annotations.
  5. Reference Documentation Generation: We are looking into a tool called swagger for documentation generation. Our current reference documentation only lists each end-point (and does not provide information on the expected request and response, leaving users to read the examples or try out the api in an ad-hoc fashion). See the screen snap below; our initial experience is positive, but the amount of work required is intimidating.
    [Screenshot: swagger editor]
  6. Updated examples for cURL and Python: We would like to rewrite our examples in a more orderly fashion, to make sure both XML and JSON sample requests and responses are provided. Ideally we will inline the “reference files” from the JUnit regression tests in step 1, to ensure that the documentation is both accurate and up to date.
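
To make step 2 more concrete, here is a rough sketch of what a styles controller might look like with Spring MVC annotations. This is illustrative only: the annotations shown are stock Spring MVC, but the class shape, the exception type and the wiring are assumptions on my part, not the final GeoServer code.

import org.geoserver.catalog.Catalog;
import org.geoserver.catalog.StyleInfo;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/rest/styles")
public class StyleController {

    private final Catalog catalog; // GeoServer catalog, injected by Spring

    public StyleController(Catalog catalog) {
        this.catalog = catalog;
    }

    // GET /rest/styles/{styleName} - the returned StyleInfo is serialized
    // to XML or JSON by the XStream-backed converter described in step 3
    @GetMapping(path = "/{styleName}",
            produces = {MediaType.APPLICATION_XML_VALUE, MediaType.APPLICATION_JSON_VALUE})
    public StyleInfo getStyle(@PathVariable String styleName) {
        StyleInfo style = catalog.getStyleByName(styleName);
        if (style == null) {
            // hypothetical exception type, mapped to a 404 response
            throw new RestException("No such style: " + styleName, HttpStatus.NOT_FOUND);
        }
        return style;
    }
}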

You can see a pretty even split in our priorities between performing the migration, and updating the documentation. We believe both of these goals need to be met for success.

Next stop Tuscany

Although this blog post focuses on the sponsorship/planning/logistics side of setting up a code sprint there is one group without whom this event could not happen – our sprint participants and in-kind sponsors (providing a venue & staff).

Thanks to GeoSolutions for hosting us, and to Astun, Boundless and GeoSolutions for the hands-on participation that makes this sprint possible.

For more information:

Categories: OSGeo Planet

gvSIG Team: Geoprocessing from scripting in gvSIG. Video tutorial available.

OSGeo Planet - Thu, 2017-03-09 10:01

Following the publication of the webinar “Learn to program in gvSIG in half an hour”, we present the perfect complement: “Geoprocessing from scripting in gvSIG”.

By geoprocessing we mean the operations for processing or manipulating spatial data performed in a Geographic Information System. gvSIG, with more than 350 geoprocesses, has enormous potential as geoprocessing software. Potential that can be extended thanks to scripting.

In this new webinar you will learn how to access the different geoprocesses of gvSIG (and of other libraries) from scripting by means of a library called gvPy, run geoprocesses with a single line of code, turn geoprocess models into scripts,…and all of this -after a brief theoretical introduction- through practical exercises.

If we have piqued your interest, keep reading…


Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, geoprocesos, gvPy, python, scripting, webinar
Categories: OSGeo Planet

SourcePole: FOSSGIS 2017 in Passau

OSGeo Planet - Wed, 2017-03-08 08:19

In two weeks the annual German-speaking FOSSGIS conference on open source GIS and OpenStreetMap starts in Passau. From March 22nd to 25th, 2017, the FOSSGIS conference will take place in the three-rivers city of Passau, with the support of the University of Passau. The FOSSGIS conference is the leading conference in the D-A-CH region for free and open source software for geographic information systems, as well as for OpenStreetMap and open data. Over four days, talks for beginners and experts, hands-on workshops and user meetings will give insights into the latest developments and applications of software projects.

Photo: Tobias Hobmeier (CC-BY-SA)

Sourcepole will again be present with a booth and invites you to interesting workshops and talks:

  • Wednesday 17:00 - Workshop: developing QGIS plugins
  • Thursday 14:30 (HS 9) - QGIS Server project status
  • Thursday 14:30 (HS 11) - From WMS to WMTS to vector tiles
  • Thursday 15:00 (HS 9) - QGIS Web Client 2

The full program is also available as a handy Android app, and online registration is open until March 19th.

We are looking forward to an interesting conference!

Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (X): Legends

OSGeo Planet - Wed, 2017-03-08 06:20

Today we are going to learn how to change the symbology of a layer, reviewing the different types of legends that are available in gvSIG Desktop.

The symbology is one of the most important properties of a layer. gvSIG includes a great variety of options to represent layers with symbols, graphs and colours. Except for the unique symbol option, legends assign a symbol to each element depending on its attribute values and the properties of the selected legend type.

By default, when a layer is added to a View, it is represented with a unique symbol in a random colour; that is, all the elements of the layer are drawn with the same symbol. To modify the symbology of a layer we have to open its “Properties” window and select the “Symbology” tab. We are going to open our “Game of Thrones” project and start to explore this part of gvSIG Desktop.

If we want to change a symbol, the easiest way is to double-click on it in the ToC (Table of Contents, the list of layers). A new window will open to select the new symbol. For example, we are going to double-click on the symbol of the “Rivers” layer.

In the new window we can change the colour and the width of the line, or pick a symbol from any of the installed symbol libraries (“gvSIG Basic” by default, although we can install many more from the Add-ons Manager). In this case we are going to change the width to 3 and select a dark blue colour. We press “Accept” to apply the changes.

Now we are going to review the types of legends that are available, and we will create a legend based on the different types of locations, the attribute we have used in the previous posts. There are many possibilities for symbology; you can check this additional documentation.

First we have to open the “Properties” window of the layer. With the layer activated, we will find this option in the “Layer/Properties” menu, or directly by clicking on the layer with the secondary mouse button.

Now we open the “Symbology” tab, and a window is shown with the applied symbology. On the left side we can find all the types of legends that we can use. Warning: depending on the type of layer (point, line or polygon) different legends are available.

In this case we are going to select the “Categories/Unique values” legend. This type of legend is used to assign a symbol to each unique value found in the attribute table of the layer. Each element is drawn depending on the value of the attribute that identifies its category. In our case we will select “Type” as the classification field; we press “Add all” and the legend created by default is shown:

The labels (on the right side) can be modified. You can change their texts here.

Now, double-clicking on each symbol, a new window opens where we can modify it or select a new symbol from our symbol libraries with the “Select symbol” option. Once they are selected, we press “Apply” and we will see the results in our View.

The best way to learn the different types of legends is to test them… We also recommend you to install and check the different symbol libraries that are available in gvSIG (hundreds of symbols of all types!)

See you in the next post…


Filed under: english, gvSIG Desktop, training Tagged: Game of Thrones, legends, symbology
Categories: OSGeo Planet

gvSIG Team: Learning GIS with Game of Thrones (XV and final): Installing add-ons

OSGeo Planet - Wed, 2017-03-08 05:58

We will dedicate this last post to the “Add-ons Manager”, a tool that every gvSIG Desktop user should know.

The Add-ons Manager is a feature that allows you to customize gvSIG by installing new extensions, whether functional or of another kind (such as symbol libraries). It is launched from the “Tools/Add-ons manager” menu, although it can also be accessed during the installation process.

Thanks to the Add-ons Manager you can access plugins that are not installed by default, as well as all the new tools that get published over time.

In the window that appears, the first thing to select is the installation source of the add-ons:

Add-ons can come from 3 sources:

  • The installation binary itself. The installation file we downloaded contains a large number of add-ons or plugins, some of which are not installed by default but are available for installation. This makes it possible to customize gvSIG without an internet connection.

  • Installation from a file. We can have a file with a set of extensions ready to be installed in gvSIG.

  • From a URL. Through an internet connection we can access all the add-ons available on the gvSIG server and install the ones we need. This is the recommended option if you want to browse all the available plugins.

Once the installation source is selected, press the “Next” button, which will show the list of available add-ons.

The Add-ons Manager interface is divided into 4 parts:

  1. List of available add-ons. It shows the name of each add-on, its version and its type. The checkboxes distinguish between add-ons already installed (green) and available ones (white). It may be worth reviewing the meaning of each of the icons.

  2. Information area for the add-on selected in “1”.

  3. Area showing the “Categories” and “Types” into which the add-ons are classified. Pressing the “Categories” and “Types” buttons updates the information in this column. Selecting a category or type from the list applies a filter that shows in “1” only the add-ons related to that category or type.

  4. Quick filter. It filters the list based on a text string entered by the user.

In our case we are going to install a new symbol library. To do this we press the “Symbols” category, which filters the list down to the plugins that are “symbol libraries”:

Next we check the “G-Maps” library:

We press the “Next” button and, once the installation is finished, the “Finish” button. A message will tell us that a restart is required (that is the case when installing functional plugins, but it is not necessary when installing symbol libraries).

If we now go to change the symbology of one of our layers, for example “Locations”, we will see that the new symbols are already available:

You can take a look at the available symbol libraries in the documentation.

And with this last post we finish this atypical introductory GIS course. We hope you have learned, and that you had as much fun as we did making it.

From here on you are ready to go deeper into the application and discover all its power. One last piece of advice…use the user mailing lists to ask any questions or report any problems you run into:

http://www.gvsig.com/es/comunidad/listas-de-correo

And remember…gvSIG is coming!

 


Filed under: gvSIG Desktop, spanish Tagged: administrador de complementos, bibliotecas de símbolos, extensiones, Juego de tronos, plugins
Categories: OSGeo Planet

Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 14): The customization story so far.

OSGeo Planet - Tue, 2017-03-07 14:40
I've been getting an increasing number of questions lately about "How do you do X?" with mapguide-react-layout. So the purpose of this post is to lay out the customization story so far, so you have a good idea of whether the thing you want to do with this viewer is possible or not.

Before I start, it's best to divide this customization story into two main categories:

  1. Customizations that reside "inside" the viewer
  2. Customizations that reside "outside" the viewer
What is the distinction? Read on.
Customizations "inside" the viewer
I define customizations "inside" the viewer as customizations:
  • That require no modifications to the entry point HTML file that initializes and starts up the viewer. To use our other viewer offerings as an analogy, your customizations work with the AJAX/Fusion viewers as-is without embedding the viewer or modifying any of the template HTML.
  • That are represented as commands that reside in either a toolbar or menu/sub-menu and registered/referenced in your Web Layout or Application Definition
  • Whose main UI reside in the Task Pane or a floating or popup window and uses client-side APIs provided by the viewer for interacting with the map.
These customizations are enabled in our existing viewer offerings through:
  • InvokeURL commands/widgets
  • InvokeScript commands/widgets
  • Client-side viewer APIs that InvokeURL and InvokeScript commands can use 
  • Custom widgets

    From the perspective of mapguide-react-layout, here is what's supported
    InvokeURL commands
    InvokeURL commands are fully supported and do what you expect from our existing viewer offerings:
    • Load a URL (that normally renders some custom UI for displaying data or interacting with the map) into the Task Pane or a floating/popup window.
    • It is selection state aware if you choose to set the flag in the command definition.
    • It will include whatever parameters you have specified in the command definition into the URL that is invoked.
    If most/all of your customizations are delivered through InvokeURL commands, then mapguide-react-layout already has you covered.
InvokeScript commands
InvokeScript commands are not supported, and I have no real plans to bring such support across. I have an alternate replacement in place, which will require you to roll your own viewer.
Client-side viewer APIs
If you use AJAX viewer APIs in your Task Pane content for interacting with the map, they are supported here as well. Most of the viewer APIs are implemented, short of a few esoteric ones.
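For illustration, here is a sketch of Task Pane script code using the AJAX viewer API (the function name, coordinates and scale are made up for the example, and the frame path shown is the classic AJAX viewer one):

    // Task Pane content reaches the viewer through its parent frames;
    // mapguide-react-layout emulates these same AJAX viewer entry points.
    function zoomAndReport() {
        var viewer = parent.parent as any; // cast: the AJAX viewer API is untyped
        var map = viewer.GetMapFrame();
        alert("Map: " + map.GetMapName() + ", session: " + map.GetSessionId());
        // Zoom to a point at a given scale; the last argument forces a refresh
        map.ZoomToView(-87.73, 43.74, 5000, true);
    }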
If your client-side code primarily interacts with APIs provided by Fusion, you're out of luck at the moment, as none of the Fusion client-side APIs have been ported across. I have no plans to port these APIs across 1:1, though I do intend to bring across some kind of pub/sub event system so your client-side code has the ability to respond to events like selection changed, etc.
Custom Widgets
In Fusion, if InvokeURL/InvokeScript widgets are insufficient for your customization needs, this is where you would create a custom widget. As with the replacement for InvokeScript commands, I intend to enable a similar system once again through custom builds of the mapguide-react-layout viewer.


My personal barometer for how well mapguide-react-layout supports "inside" customizations is the MapGuide PHP Developer's Guide samples.


If you load the Web Layout for this sample in the mapguide-react-layout viewer, you will see that all of the examples (and the viewer APIs they demonstrate) work as before. If your customizations are similar in nature to what is demonstrated in the MapGuide PHP Developer's Guide samples, then things should be smooth sailing.

    Customizations "outside" the viewer
    I define customizations "outside" the viewer as primarily being one of 2 things:
    • Embedding the viewer in a frame/iframe or a DOM element that is not full width/height and providing sufficient APIs so that code in the embedding content document can interact with the viewer or for code in the embedding content document to be able to listen on certain viewer events.
    • Being able to init the viewer with all the required configuration (ie. You do not intend to pass a Web Layout or Application Definition to init this viewer)
    On this front, mapguide-react-layout doesn't offer much beyond a well-defined entry point to init and mount the viewer component.
    Watch this space for how I hope to tackle this problem.
Rolling your own viewer
The majority of the work done since the last release has been to enable the scenario of rolling your own viewer. By rolling your own viewer, you get full control over viewer customization for things the default viewer bundle does not support, such as:
  • Creating your own layout templates
  • Creating your own script commands
  • Creating your own components
If you do decide to go down this path, there are some things you should be familiar with:
  • The node.js ecosystem; in particular, how to use npm/yarn
  • webpack
  • TypeScript, along with some experience with React and Redux
Basically, if you go down this road you should have a basic idea of how frontend web development is done in 2017, because it is no longer a matter of manually editing HTML files and script tags with sprinkles of jQuery.
What I intend to do to allow for this scenario is to publish the viewer as an npm module. To roll your own viewer, you would npm/yarn install the mapguide-react-layout module, write your custom layouts/commands/components in TypeScript, and then set up a webpack configuration to pull it all together into your own custom viewer bundle, roughly as sketched below.
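A hypothetical webpack configuration for such a setup (the entry point, file names and the ts-loader choice are illustrative assumptions, not this project's actual config):

    // webpack.config.js: a generic sketch, not mapguide-react-layout's real config
    const path = require("path");

    module.exports = {
        // Your custom viewer entry point (hypothetical file name)
        entry: "./src/my-viewer.tsx",
        output: {
            path: path.resolve(__dirname, "dist"),
            filename: "my-viewer.bundle.js"
        },
        resolve: {
            // Resolve TypeScript sources as well as plain JS
            extensions: [".ts", ".tsx", ".js"]
        },
        module: {
            rules: [
                // Compile TypeScript via ts-loader
                { test: /\.tsx?$/, loader: "ts-loader" }
            ]
        }
    };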
When this is ready, I hope to have an example project available (probably in a different GitHub repository) that demonstrates how to do this.
In Closing
When you ask the question of "How can I do X?" in mapguide-react-layout, you should reframe the question in terms of whether the thing you are trying to do is "inside" or "outside" the viewer. If it is "inside" the viewer and you were able to do it in the past with the AJAX/Fusion viewers through the extension points and APIs offered, chances are very high that equivalent functionality has already been ported across.
If you are trying to do it "outside" the viewer, you'll have to wait for me to add whatever APIs and extension points are required.
Failing that, you will have the ability to consume the viewer as an npm module and roll your own viewer with your specific customizations.
Failing that?
You could always fork the GitHub repo and make whatever modifications you need. But you should not have to go that far.
Categories: OSGeo Planet

gvSIG Team: Learn to program in gvSIG in half an hour

OSGeo Planet - Tue, 2017-03-07 10:16

We at the gvSIG Association frequently give scripting workshops at the various gvSIG events held around the world. It is interesting to attend these workshops as an "observer", because you get to see students come in with no knowledge of programming in gvSIG Desktop and leave with the foundation they need to start developing their own scripts in the application.

That is one of the main goals of scripting: giving all kinds of users - not necessarily programmers - a mechanism to develop applications or tools on top of gvSIG Desktop in a very simple way.

So simple, in fact, that you can learn scripting in half an hour?

That is the challenge that our gvSIG Association colleague Óscar Martínez set himself with the webinar we held at the Universidad Miguel Hernández, the video of which is now available.

Set aside half an hour of your time and keep reading…


Filed under: gvSIG Desktop, spanish Tagged: desarrollo, jython, python, quick start, scripting, tutorial
Categories: OSGeo Planet

Stefano Costa: Numbering boxes of archaeological items, barcodes and storage management

OSGeo Planet - Tue, 2017-03-07 08:23

Last week a tweet from the always brilliant Jolene Smith inspired me to write down my thoughts and ideas about numbering boxes of archaeological finds. For me, this also includes thinking about the physical labelling, and barcodes.

Question for people who organize things for their job. I'm giving a few thousand boxes unique IDs. should I go random or sequential?

— Jolene Smith (@aejolene) March 3, 2017

The question Jolene asks is: should I use sequential or random numbering? To which many answered: use sequential numbering, because it bears significance and can help detect problems like missing items, duplicates, etc. Furthermore, if the number of items you need to number is small (say, a few thousand), sequential numbering is much more readable than a random sequence.

Like many other archaeologists faced with managing boxes of items, I have chosen sequential numbering in the past. With 200 boxes and counting, labels were easily generated and each box had an associated web page listing the content, with a QR code providing a handy link from the physical label to the digital record. This numbering system was put in place during 3 years of fieldwork in Gortyna and I can say that I learned a few things in the process.

The most important thing is that it's very rare to start from scratch with the correct approach: boxes were labeled with a description of their content for 10 years before I adopted the numbering system pictured here. This sometimes resulted in absurdly long labels, easily at risk of being damaged, and difficult to search since no digital recording was made. I decided a numbering system was needed because it was difficult to look for specific items, after I had digitised all labels with their position in the storage building (this often implied the need to number shelves, corridors, etc.).

The next logical step was therefore to decouple the labels from the content listing ‒ any digital tool was good here, even a spreadsheet. Decoupling box number from description of content made it possible to manage the not-so-rare case of items moved from one box to another (after conservation, or because a single stratigraphic context was excavated in multiple steps, or because a fragile item needs more space …), and the other frequent case of data that is augmented progressively (at first, you put finds from stratigraphic unit 324 in it, then you add 4.5 kg of Byzantine amphorae, 78 sherds of cooking jars, etc.).

Since we already had a wiki as our knowledge base, it made sense to use that, creating a page for each box and linking from the page of the stratigraphic unit or that of the single item to the box page (this is done with Semantic MediaWiki, but it doesn't matter). Having a URL for each box, I could put a QR code on labels: the updated information about the box content was in one place (the wiki) and could be reached either via QR code or by manually looking up the box number. I don't remember the details of my reasoning at the time, but I'm happy I didn't choose to store the description directly inside the QR code ‒ so that scanning the barcode would immediately show a textual description instead of redirecting to the wiki ‒ because that would require changing the QR code on each update (highly impractical), and still leave the information unsearchable.

All this is properly documented and nothing is left implicit. Sometimes you will need to use larger boxes, or smaller ones, or have some items so big that they can't be stored inside any container: you can still treat all of these cases as conceptual boxes, number and label them, give them URLs.

QR codes used for boxes of archaeological items in Gortyna

There are limitations in the numbering/labelling system described above. The worst limitation is that in the same building (sometimes on the same shelf) there are boxes from other excavation projects that don't follow this system at all, and either have a separate numbering sequence or no numbering at all, hence the "namespacing" of labels with the GQB prefix, so that the box is effectively called GQB 138 and not 138. I think an efficient numbering system would be one that is applied at least to the scale of one storage building, but why stop there?

Turning back to the initial question, what kind of numbering should we use? When I started working at the Soprintendenza in Liguria, I was faced with the result of no less than 70 years of work, first in Ventimiglia and then in Genoa. In Ventimiglia, each excavation area got its own "namespace" (like T for the Roman theater) and then a sequential numbering of finds (leading to items identified as T56789), but a single continuous sequential sequence for the numbering of boxes in the main storage building. A second, newer building was unfortunately assigned a separate sequence starting again from 1 (and insufficient namespacing). In Genoa, I found almost no numbering at all, despite (or perhaps, because of) the huge number of unrelated excavations that contributed to a massive amount of boxes. Across the region, there are some 50 other buildings, large and small, with boxes that should be recorded and accounted for by the Soprintendenza (especially since most archaeological finds are State property in Italy). Some buildings have a numbering sequence, most have paper registries and nothing else.

A sequential numbering sequence seems transparent (and allows some neat tricks like the German tanks problem), since you could potentially have an ordered list and look up each number manually, which you can't do easily with a random number. You also get the impression of being able to track gaps in a sequence (yes, I do look for gaps in numeric sequences all the time), thus spotting any missing item. Unfortunately, I have been bitten too many times by sequential numbers that turned out to have horrible bis suffixes, or that were only applied to "standard" boxes, leaving out oversized items.

On the other hand, the advantages of random numbering seem to increase linearly with the number of separate facilities ‒ I could replace random with non-transparent to better explain the concept. A good way to look at the problem is perhaps to ask whether numbering boxes is done as part of a bookkeeping activity that has its roots in paper registries, or whether it is functional to the logistics of managing cultural heritage items in a modern and efficient way.

Logistics. Do FedEx, UPS or Amazon employees care what number sequence they use to track items? Does the cashier at the supermarket care whether the EAN barcode on your shopping items is sequential? I don't know, but I do know that they have a very efficient system in place, in which human operators are never required to actually read numerical IDs (but humans are still capable of checking whether the number on the screen is the same as the one printed on the label). There are many types of barcode used to track items, both 1D and 2D, all with their pros and cons. I also know of some successful experiments with RFID for archaeological storage boxes (in the beautiful depots at Ostia, for example), which can record numbers up to 38 digits.

Based on all the reflections of the past years, my idea for a region- or state-wide numbering+labeling system is as follows (in RFC-style wording):

  1. it MUST use a barcode as the primary means of reading the numerical ID from the box label
  2. the label MUST contain both the barcode and the barcode content as human-readable text
  3. it SHOULD use a random numeric sequence
  4. it MUST use a fixed-length string of numbers
  5. it MUST avoid the use of any suffixes like a, b, bis

In practice, I would like to use UUID4 together with a barcode.

A UUID4 looks like this: 1b08bcde-830f-4afd-bdef-18ba918a1b32. It is the UUID version of a random number, it can be generated rather easily, works well with barcodes and has a collision probability that is compatible with the scale I'm concerned with ‒ incidentally, I think it's lower than the probability of human error in assigning a number or writing it down with a pencil or a keyboard.

The label will contain the UUID string as text, and the barcode. There will be no explicit URL in the barcode, and any direct link to a data management system will be handled by the same application used to read the barcode (that is, a mobile app with an embedded barcode reader). The data management system will use the UUID as part of the URL associated with each box.

You can prepare labels beforehand and apply them to boxes afterwards, recording all the UUIDs as you attach the labels to the boxes. It doesn't sound straightforward, but in practice it is.
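As a sketch of how little code this takes (TypeScript/Node here; the qrcode npm package and the one-PNG-per-label idea are illustrative choices on my part, not requirements of the system):

    import { randomUUID } from "node:crypto";
    import * as QRCode from "qrcode"; // assumed npm package for QR generation

    async function makeLabel(): Promise<string> {
        const boxId = randomUUID(); // a random (version 4) UUID
        // The barcode encodes only the UUID, never a URL: any link to the data
        // management system is resolved by the app that scans the label.
        await QRCode.toFile(`label-${boxId}.png`, boxId);
        return boxId; // record this as the label is attached to a box
    }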

    And since we’re deep down the rabbit hole, why stop at the boxes? Let’s recall some of the issues that I described non-linearly above:

    1. the content of boxes is not immutable: one day item X is in box Y, the next day it gets moved to box Z
    2. the location of boxes is not immutable: one day box Y is in room A of building B, the next day it gets moved to room C of building D
    3. both #1 and #2 can and will occur in bulk, not only as discrete events

The same UUIDs can be applied in both directions in order to describe the location of each item in a large bottom-up tree structure (add as many levels as you see fit, such as shelf rows and columns):

item X → box Y → shelf Z → room A → building B

or:

b68e3e61-e0e7-45eb-882d-d98b4c28ff31 → 3ef5237e-f837-4266-9d85-e08d0a9f4751
3ef5237e-f837-4266-9d85-e08d0a9f4751 → 77372e8c-936f-42cf-ac95-beafb84de0a4
77372e8c-936f-42cf-ac95-beafb84de0a4 → e895f660-3ddf-49dd-90ca-e390e5e8d41c
e895f660-3ddf-49dd-90ca-e390e5e8d41c → 9507dc46-8569-43f0-b194-42601eb0b323

Now imagine adding a second item W to the same box: since the data for box Y was already complete, one just needs to fill in one container relationship:

b67a3427-b5ef-4f79-b837-34adf389834f → 3ef5237e-f837-4266-9d85-e08d0a9f4751

and since we would have already built our hypothetical data management system, this data is filled into the system just by scanning two barcodes on a mobile device that will sync as soon as a connection is available. Moving one box to another shelf is again a single operation, despite the many items actually moved, because the leaves and branches of the data tree are naïve and only know about their parents and children, but know nothing about grandparents and siblings.
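A minimal sketch of this naïve parent-pointer structure (the Map-based storage and the helper function are my illustration, not part of any real system):

    // Each node only knows its parent; the full UUIDs are from the example above.
    const parentOf = new Map<string, string>();

    // Adding item W to an already-recorded box Y is a single new relationship:
    parentOf.set(
        "b67a3427-b5ef-4f79-b837-34adf389834f",  // item W
        "3ef5237e-f837-4266-9d85-e08d0a9f4751"   // box Y
    );

    // Walk up the tree to answer "where is this thing right now?"
    function locationPath(id: string): string[] {
        const path: string[] = [];
        let current: string | undefined = id;
        while (current !== undefined) {
            path.push(current);
            current = parentOf.get(current);
        }
        return path; // item → box → shelf → room → building
    }

    // Moving box Y to another shelf is one update, however many items the box
    // contains, because children know nothing about grandparents or siblings.
    parentOf.set("3ef5237e-f837-4266-9d85-e08d0a9f4751", "<new-shelf-uuid>");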

There are a few more technical details about data structures needed to have a decent proof of concept, but I already wrote down too many words that are tangential to the initial question of how to number boxes.

Categories: OSGeo Planet