The aim of the Cátedra gvSIG is to create a meeting point for users interested in free geospatial technologies. To foster an environment of shared knowledge and to take part in the dissemination of free geomatics, the chair organizes this international contest to encourage all users of gvSIG and of free Geographic Information Systems to share their work and give it visibility.
Students and graduates of high school, vocational training and university, as well as university professors and researchers from all countries, can take part in this contest. To enter the competition you must meet the following requirements: works must be produced with free Geographic Information Systems and may address any area of knowledge; works must have been produced in 2016 or later; papers may be submitted collectively or individually; and submissions may be written in Spanish, Valencian or English.
If the work is based on a new development built with free and open source GIS geospatial technologies, it must be released under the GNU/GPL v3 license. Among the selected works, a prize of 500 euros will be awarded in each of the following categories:
Work produced by high school or vocational training students.
Final university project (Bachelor's, Degree or Master's).
Doctoral thesis or research paper.
Submissions should be sent to email@example.com and firstname.lastname@example.org no later than November 1, 2017. Selected documents will be published in the repository of the Cátedra gvSIG UMH. The jury will evaluate the methodology, clarity and innovative nature of the work, assessing as well the relevance and applicability of the research.
Winners will be announced at the next International gvSIG Conference.
Filed under: english, events, press office Tagged: awards, contest, open source
In our previous blog post we highlighted the GeoServer Code Sprint 2017 taking place at the end of this month. We are all looking forward to GeoSolutions hosting us in beautiful Tuscany, and we have lots of work to do.
One of the secrets (and this comes as no surprise) to having a successful code sprint is being prepared. With this year’s REST API migration from restlet to spring model-view-controller we want to have all technical decisions made, and examples for the developers to work from, prior to any boots hitting the ground in Italy.
But before we get into the details…

Code Sprint Sponsors
We would like to thank our sprint sponsors – we are honoured that so many organizations have stepped up worldwide to fund this activity.
Gaia3D is a professional software company in the field of geospatial information and Earth science technology. We would like to thank Gaia3D for their gold sponsorship.
Insurance Australia Group (IAG) is our second gold sponsor. This is a great example of open source being used, and supported, by an engineering team. Thanks to Hugh Saalmans and the Location Engineering team at IAG for your support.
Boundless is once again sponsoring the GeoServer team. Boundless provides a commercially supported open source GIS platform for desktop, server, mobile and cloud. Thanks to Quinn Scripter and the Boundless suite team for their gold sponsorship.
How 2 Map is pleased to support this year’s event with a bronze sponsorship.
I am overjoyed FOSSGIS (German local OSGeo chapter) is supporting us with a bronze sponsorship. This sponsorship means a lot to us as the local chapter program focuses on users and developers; taking the time to support our project directly is a kind gesture.
Sponsorship Still Needed
While we have a couple of verbal commitments to sponsor – we are still $1500 USD off the pace. If your organization has some capacity to financially support this activity we would dearly love your support.
This is an official OSGeo activity; any excess money is returned to the foundation to help the next open source sprint. OSGeo sponsorship is cumulative. Check their website for details on how helping out the GeoServer team can be further recognized.
Update: Since this post was published we are happy to announce new sponsor(s).
Thanks to Caroline Chanlon and the team at Atol Conseils et Développements for bronze sponsorship.
In this week’s GeoServer meeting we had a chance to sit down and plan out the steps needed to get ready.
The majority of prep will go into performing the restlet to spring mvc migration for a sample REST API end point to produce a “code example” for developers to follow. We have selected the rest/styles endpoint as one of the easier examples:
- Preflight check: Before we start we want to have a good baseline of the current REST API responses. We would like to double check that each endpoint has a JUnit test case that checks the response against a reference file. Most of our tests just count the number of elements, or drill into the content to look for a specific value. The goal is to use these reference files as a quick “regression test” when performing the migration.
- Migrate rest/styles from StyleResource (restlet) to StyleController (spring): This should be a bit of fun, part of why spring model-view-controller was selected. Our goal is to have one Controller per end-point, and configure the controller using annotations directly in the Java file. This ends up being quite readable with variable names being taken directly out of the URL path. It is also easier to follow since you do not have to keep switching between XML and Java files to figure out what is going on. It is important that the example is “picture perfect” as it will be used as a template by the developers over the course of the sprint, and will be an example of the level of quality we expect during the activity.
- Create StyleInfo bindings (using XStream for XML and JSON generation): The above method returns a StyleInfo data structure; our current restlet solution publishes each “resource” using the XStream library. We think we can adapt our XStream work for use in spring model-view-controller by configuring a binding for StyleInfo and implementing it using XStream. This approach is the key reason we are confident in this migration being a success: existing clients that depend on exactly the same output from GeoServer should get exactly the same output.
- StyleController path management: There is some work to configure each controller; while we have the option of doing some custom logic inside each controller, we would like to keep this to a minimum. This step is the small bit of applicationContext.xml configuration work we need to do for each controller; we expect it to be less work than restlet given the use of annotations.
- Reference Documentation Generation: We are looking into a tool called Swagger for documentation generation. Our current reference documentation only lists each end-point, and does not describe the expected request and response, leaving users to read the examples or try out the API in an ad-hoc fashion. See the screenshot below; our initial experience is positive, but the amount of work required is intimidating.
- Updated examples for cURL and Python: We would like to rewrite our examples in a more orderly fashion to make sure both XML and JSON sample requests and responses are provided. Ideally we will inline the “reference files” from the JUnit regression test in step 1 to ensure that the documentation is both accurate and up to date.
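To make the prep work concrete, here is a minimal Python sketch of two of the ideas above: comparing a response body against a stored reference file, and building the rest/styles request URLs used in the cURL/Python examples. The helper names (`matches_reference`, `style_url`) are our own illustrations, not existing GeoServer utilities.

```python
import pathlib


def matches_reference(body: str, reference: pathlib.Path) -> bool:
    """Compare a REST response body against a stored reference file,
    ignoring trailing whitespace so cosmetic changes do not fail."""
    def normalize(text):
        return [line.rstrip() for line in text.strip().splitlines()]
    return normalize(body) == normalize(reference.read_text(encoding="utf-8"))


def style_url(base, style, fmt="xml", workspace=None):
    """Build the REST URL for a style resource: global styles live at
    /rest/styles/{name}.{format}, workspace styles under
    /rest/workspaces/{ws}/styles/{name}.{format}."""
    path = f"workspaces/{workspace}/styles" if workspace else "styles"
    return f"{base.rstrip('/')}/rest/{path}/{style}.{fmt}"
```

A JUnit migration test could follow the same pattern: fetch the endpoint, then assert the body matches the stored reference file, modulo whitespace.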
You can see a pretty even split in our priorities between performing the migration and updating the documentation. We believe both of these goals need to be met for success.

Next stop Tuscany
Although this blog post focuses on the sponsorship/planning/logistics side of setting up a code sprint there is one group without whom this event could not happen – our sprint participants and in-kind sponsors (providing a venue & staff).
For more information:
Following the publication of the webinar “Learn to program in gvSIG in half an hour”, we present the perfect complement: “Geoprocessing from Scripting in gvSIG”.
By geoprocessing we mean the operations for processing or manipulating spatial data performed in a Geographic Information System. With more than 350 geoprocesses, gvSIG has enormous potential as geoprocessing software, potential that can be extended even further thanks to scripting.
In this new webinar you can learn how to access the different geoprocesses of gvSIG (and of other libraries) from scripting through a library called gvPy, run geoprocesses with a single line of code, convert geoprocessing models into scripts, and more. After a brief theoretical introduction, all of this is shown through practical exercises.
If we have piqued your interest, keep reading…
Filed under: gvSIG Desktop, spanish Tagged: geoprocesamiento, geoprocesos, gvPy, python, scripting, webinar
In two weeks the annual German-language FOSSGIS conference on open source GIS and OpenStreetMap begins in Passau. From 22 to 25 March 2017, the FOSSGIS conference will take place in the three-rivers city of Passau with the support of the University of Passau. The FOSSGIS conference is the leading conference in the German-speaking (D-A-CH) region for free and open source software for geographic information systems, as well as for OpenStreetMap and open data. Over four days, talks for beginners and experts, hands-on workshops and user meetings will offer insights into the latest developments and applications of software projects.
Sourcepole will again be present with a booth and invites you to a number of interesting workshops and talks:
- Wednesday 17:00 - Workshop: Developing QGIS Plugins
- Thursday 14:30 (HS 9) - QGIS Server project status
- Thursday 14:30 (HS 11) - From WMS to WMTS to vector tiles
- Thursday 15:00 (HS 9) - QGIS Web Client 2
We look forward to an interesting conference!
Today we are going to learn about how to change the symbology of a layer, reviewing different types of legends that are available in gvSIG Desktop.
The symbology is one of the most important properties of a layer. gvSIG includes a great variety of options to represent layers with symbols, graphs and colours. Except for the unique symbol option, each legend type assigns a symbol to each element depending on its attribute values and on the properties of the selected legend type.
By default, when a layer is added to a View it is represented with a single symbol in a random colour; that is, all the elements of the layer are drawn with the same symbol. To modify the symbology of a layer we have to open its “Properties” window and select the “Symbology” tab. We are going to open our “Game of Thrones” project and start exploring this part of gvSIG Desktop.
If we want to change a symbol, the easiest way is to double-click on it in the ToC (Table of Contents, the list of layers). A new window will open where we can select the new symbol. For example, we are going to double-click on the symbol of the “Rivers” layer.
In the new window we can change the colour and width of the line, or pick a symbol from any of the installed symbol libraries (“gvSIG Basic” by default, although we can install many more from the Add-ons Manager). In this case we are going to change the width to 3 and select a dark blue colour. We press “Accept” to apply the changes.
Now we are going to review the types of legends that are available, and we will build a legend based on the different types of locations, an attribute that we have used in previous posts. There are many possibilities for symbology; see the additional documentation for more.
First we have to open the “Properties” window of the layer. With the layer activated, we will find this option in the “Layer/Properties” menu, or directly by right-clicking on the layer in the ToC.
Now we go to the “Symbology” tab, which shows the currently applied symbology. On the left side we can find all the types of legends that we can use. Warning: depending on the type of layer (point, line or polygon), different legends are available.
In this case we are going to select a legend about “Categories/Unique values”. This type of legend is used to assign a symbol to each unique value specified at the attribute table of the layer. Each element is drawn depending on the value of an attribute that identifies the category. In our case we will select “Type” for classification field; we press “Add all” and it will show the legend created by default:
The Labels (at the right side) can be modified. You can change the texts here.
Now, double-clicking on every symbol a new window will be opened where we can modify them or select new symbols from our symbol libraries with “Select symbol” option. Once they are selected we press “Apply” and we will see the results in our View.
The best way to learn the different types of legends is to try them out… We also recommend that you install and explore the different symbol libraries that are available in gvSIG (hundreds of symbols of all types!).
See you in the next post…
Filed under: english, gvSIG Desktop, training Tagged: Game of Thrones, legends, symbology
We will dedicate this final post to the “Add-ons Manager”, a tool that every gvSIG Desktop user should know.
The Add-ons Manager is a feature that lets you customize gvSIG by installing new extensions, whether functional or of another kind (such as symbol libraries). It is launched from the “Tools/Add-ons manager” menu, although it can also be accessed during the installation process.
Thanks to the Add-ons Manager you can access not only the plugins that are not installed by default, but also all the new tools as they are published.
Add-ons can come from three sources:
The installation binary itself. The installation file we downloaded contains a large number of add-ons or plugins, some of which are not installed by default but are available for installation. This makes it possible to customize gvSIG without an internet connection.
Installation from a file. We may have a file with a set of extensions ready to be installed in gvSIG.
From a URL. With an internet connection we can access all the add-ons available on the gvSIG server and install the ones we need. This is the recommended option if you want to browse all the available plugins.
Once the installation source has been selected, press the “Next” button, which will show the list of available add-ons.
List of available add-ons. It shows the name of each add-on, its version and its type. The checkboxes distinguish between add-ons that are already installed (green) and those that are available (white). It may be worth reviewing the meaning of each of the icons.
Information area for the add-on selected in “1”.
Area showing the “Categories” and “Types” into which the add-ons are classified. Pressing the “Categories” and “Types” buttons updates the information in this column. Selecting a category or type from the list applies a filter that shows in “1” only the add-ons related to that category or type.
Quick filter. It filters the list based on a text string entered by the user.
We press the “Next” button and, once the installation has finished, the “Finish” button. A message will tell us that a restart is required (this is the case when installing functional plugins, but it is not necessary when installing symbol libraries).
You can take a look at the available symbol libraries in the documentation.
And with this final post we finish this atypical introductory GIS course. We hope you have learned something, and that it has been as much fun for you as it was for us to make it.
From here on you are ready to dig deeper into the application and discover its full power. One last piece of advice: use the user mailing lists to ask any question or to report any problem you come across:
And remember… gvSIG is coming!
Filed under: gvSIG Desktop, spanish Tagged: administrador de complementos, bibliotecas de símbolos, extensiones, Juego de tronos, plugins
Jackie Ng: React-ing to the need for a modern MapGuide viewer (Part 14): The customization story so far.
Before I start, it's best to divide this customization story into two main categories:
- Customizations that reside "inside" the viewer
- Customizations that reside "outside" the viewer
Customizations "inside" the viewer
I define customizations "inside" the viewer as customizations:
- That require no modifications to the entry point HTML file that initializes and starts up the viewer. To use our other viewer offerings as an analogy, your customizations work with the AJAX/Fusion viewers as-is without embedding the viewer or modifying any of the template HTML.
- That are represented as commands that reside in either a toolbar or menu/sub-menu and are registered/referenced in your Web Layout or Application Definition
- Whose main UI resides in the Task Pane or a floating or popup window, and which use client-side APIs provided by the viewer for interacting with the map. In concrete terms, this covers:
- InvokeURL commands/widgets
- InvokeScript commands/widgets
- Client-side viewer APIs that InvokeURL and InvokeScript commands can use
- Custom widgets
InvokeURL commands are fully supported and do what you expect from our existing viewer offerings:
- Load a URL (that normally renders some custom UI for displaying data or interacting with the map) into the Task Pane or a floating/popup window.
- It is selection state aware if you choose to set the flag in the command definition.
- It will include whatever parameters you have specified in the command definition into the URL that is invoked.
InvokeScript commands are not supported and I have no real plans to bring such support across. I have an alternate replacement in place, which will require you to roll your own viewer.
Client-side viewer APIs
If you use AJAX viewer APIs in your Task Pane content for interacting with the map, they are supported here as well. Most of the viewer APIs are implemented, short of a few esoteric ones.
If your client-side code is primarily interacting with APIs provided by Fusion, you're out of luck at the moment as none of the Fusion client-side APIs have been ported across. I have no plans to port these APIs across 1:1, though I do intend to bring across some kind of pub/sub event system so your client-side code has the ability to respond to events like selection changed, etc.
In Fusion, if InvokeURL/InvokeScript widgets are insufficient for your customization needs, this is where you would create a custom widget. As with the replacement for InvokeScript commands, I intend to enable a similar system through custom builds of the mapguide-react-layout viewer.
My personal barometer for how well mapguide-react-layout supports "inside" customizations is the MapGuide PHP Developer's Guide samples.
If you load the Web Layout for this sample in the mapguide-react-layout viewer, you will see all of the examples (and the viewer APIs they demonstrate) all work as before. If your customizations are similar in nature to what is demonstrated in the MapGuide PHP Developer's Guide samples, then things should be smooth sailing.
Customizations "outside" the viewer
I define customizations "outside" the viewer as primarily being one of two things:
- Embedding the viewer in a frame/iframe or a DOM element that is not full width/height and providing sufficient APIs so that code in the embedding content document can interact with the viewer or for code in the embedding content document to be able to listen on certain viewer events.
- Being able to init the viewer with all the required configuration (ie. You do not intend to pass a Web Layout or Application Definition to init this viewer)
Watch this space for how I hope to tackle this problem.
Rolling your own viewer
The majority of the work done since the last release is to enable the scenario of being able to roll your own viewer. By being able to roll your own viewer, you will have full control over viewer customization for things the default viewer bundle does not support, such as:
- Creating your own layout templates
- Creating your own script commands
- Creating your own components
This approach assumes some familiarity with modern front-end tooling:
- You are familiar with the node.js ecosystem. In particular, you know how to use npm/yarn
- You are familiar with webpack
- Finally, you are familiar with TypeScript and have some experience with React and Redux
What I intend to do to allow for this scenario is to publish the viewer as an npm module. To roll your own viewer, you would npm/yarn install the mapguide-react-layout module, write your custom layouts/commands/components in TypeScript, and then set up a webpack configuration to pull it all together into your own custom viewer bundle.
I hope to have an example project available (probably in a different GitHub repository) when this is ready that demonstrates how to do this.
When you ask the question of "How can I do X?" in mapguide-react-layout, you should reframe the question in terms of whether the thing you are trying to do is "inside" or "outside" the viewer. If it is "inside" the viewer and you were able to do this in the past with the AJAX/Fusion viewers through the extension points and APIs offered, chances are very high that similar equivalent functionality has already been ported across.
If you are trying to do this "outside" the viewer, you'll have to wait for me to add whatever APIs and extension points are required.
Failing that, you will have the ability to consume the viewer as an npm module and roll your own viewer with your specific customizations.
You could always fork the GitHub repo and make whatever modifications you need. But you should not have to go that far.
From the gvSIG Association we frequently give scripting workshops at the various gvSIG conferences held around the world. It is interesting to attend these workshops as an observer, because you can watch students come in with no knowledge of programming in gvSIG Desktop and leave with the foundation they need to start developing their own scripts in the application.
That is one of the main goals of scripting: to give all kinds of users (not necessarily programmers) a mechanism to develop applications or tools on top of gvSIG Desktop in a very simple way.
So simple that you can learn scripting in half an hour?
That is the challenge that our colleague from the gvSIG Association, Óscar Martínez, set himself with the webinar we held at the Universidad Miguel Hernández, the video of which is now available.
Set aside half an hour of your time and keep reading…
Filed under: gvSIG Desktop, spanish Tagged: desarrollo, jython, python, quick start, scripting, tutorial
Last week a tweet from the always brilliant Jolene Smith inspired me to write down my thoughts and ideas about numbering boxes of archaeological finds. For me, this also includes thinking about the physical labelling, and barcodes.
Question for people who organize things for their job. I'm giving a few thousand boxes unique IDs. should I go random or sequential?
— Jolene Smith (@aejolene) March 3, 2017
The question Jolene asks is: should I use sequential or random numbering? To which many answered: use sequential numbering, because it bears significance and can help detecting problems like missing items, duplicates, etc. Furthermore, if the number of items you need to number is small (say, a few thousands), sequential numbering is much more readable than a random sequence. Like many other archaeologists faced with managing boxes of items, I have chosen to use sequential numbering in the past. With 200 boxes and counting, labels were easily generated and each box had an associated web page listing the content, with a QR code providing a handy link from the physical label to the digital record. This numbering system was put in place during 3 years of fieldwork in Gortyna and I can say that I learned a few things in the process. The most important thing is that it’s very rare to start from scratch with the correct approach: boxes were labeled with a description of their content for 10 years before I adopted the numbering system pictured here. This sometimes resulted in absurdly long labels, easily at risk of being damaged, difficult to search since no digital recording was made. I decided a numbering system was needed because it was difficult to look for specific items, after I had digitised all labels with their position in the storage building (this often implied the need to number shelves, corridors, etc.). The next logical thing was therefore to decouple the labels from the content listing ‒ any digital tool was good here, even a spreadsheet. 
Decoupling the box number from the description of its content allowed us to manage the not-so-rare case of items moved from one box to another (after conservation, or because a single stratigraphic context was excavated in multiple steps, or because a fragile item needs more space …), and the other frequent case of data that is augmented progressively (at first you put finds from stratigraphic unit 324 in it, then you add 4.5 kg of Byzantine amphorae, 78 sherds of cooking jars, etc.). Since we already had a wiki as our knowledge base, it made sense to use that, creating a page for each box and linking from the page of the stratigraphic unit or that of the single item to the box page (this is done with Semantic MediaWiki, but it doesn’t matter). Having a URL for each box, I could put a QR code on labels: the updated information about the box content was in one place (the wiki) and could be reached either via QR code or by manually looking up the box number. I don’t remember the details of my reasoning at the time, but I’m happy I didn’t choose to store the description directly inside the QR code ‒ so that scanning the barcode would immediately show a textual description instead of redirecting to the wiki ‒ because that would require changing the QR code on each update (highly impractical), and still leave the information unsearchable. All this is properly documented and nothing is left implicit. Sometimes you will need to use larger boxes, or smaller ones, or have some items so big that they can’t be stored inside any container: you can still treat all of these cases as conceptual boxes, number and label them, give them URLs.

QR codes used for boxes of archaeological items in Gortyna
There are limitations in the numbering/labelling system described above. The worst limitation is that in the same building (sometimes on the same shelf) there are boxes from other excavation projects that don’t follow this system at all, and either have a separate numbering sequence or no numbering at all, hence the “namespacing” of labels with the GQB prefix, so that the box is effectively called GQB 138 and not 138. I think an efficient numbering system would be one that is applied at least to the scale of one storage building, but why stop there?
Turning back to the initial question, what kind of numbering should we use? When I started working at the Soprintendenza in Liguria, I was faced with the result of no less than 70 years of work, first in Ventimiglia and then in Genoa. In Ventimiglia, each excavation area got its “namespace” (like T for the Roman theater) and then a sequential numbering of finds (leading to items identified as T56789) but a single continuous sequential sequence for the numbering of boxes in the main storage building. A second, newer building was unfortunately assigned a separate sequence starting again from 1 (and insufficient namespacing). In Genoa, I found almost no numbering at all, despite (or perhaps, because of) the huge number of unrelated excavations that contributed to a massive amount of boxes. Across the region, there are some 50 other buildings, large and small, with boxes that should be recorded and accounted for by the Soprintendenza (especially since most archaeological finds are State property in Italy). Some buildings have a numbering sequence, most have paper registries and nothing else. A sequential numbering sequence seems transparent (and allows some neat tricks like the German tanks problem), since you could potentially have an ordered list and look up each number manually, which you can’t do easily with a random number. You also get the impression of being able to track gaps in a sequence (yes, I do look for gaps in numeric sequences all the time), thus spotting any missing item. Unfortunately, I have been bitten too many times by sequential numbers that turned out to have horrible bis suffixes, or that were only applied to “standard” boxes leaving out oversized items.
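As an aside, the “neat trick” mentioned above works like this: with sequential numbering, the classic German tanks estimator guesses the total number of boxes from the serial numbers you have actually observed. A quick sketch (the function is our own illustration):

```python
def estimate_total(observed_serials):
    """Minimum-variance unbiased estimator for the German tanks problem:
    N_hat = m + m/k - 1, where m is the largest serial observed and
    k is the number of distinct serials observed."""
    serials = set(observed_serials)
    m, k = max(serials), len(serials)
    return m + m / k - 1
```

Seeing boxes 19, 40, 42 and 60 on a shelf suggests roughly 74 boxes in total, something no random numbering scheme can offer.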
On the other hand, the advantages of random numbering seem to increase linearly with the number of separate facilities ‒ I could replace random with non-transparent to better explain the concept. A good way to look at the problem is perhaps to ask whether numbering boxes is done as part of a bookkeeping activity that has its roots in paper registries, or it is functional to the logistics of managing cultural heritage items in a modern and efficient way.
Logistics. Do FedEx, UPS, Amazon employees care what number sequence they use to track items? Does the cashier at the supermarket care whether the EAN barcode on your shopping items is sequential? I don’t know, but I do know that they have a very efficient system in place, in which human operators are never required to actually read numerical IDs (but humans are still capable of checking whether the number on the screen is the same as the one printed on the label). There are many types of barcode used to track items, both 1D and 2D, all with their pros and cons. I also know of some successful experiments with RFID for archaeological storage boxes (in the beautiful depots at Ostia, for example), that can record numbers up to 38 digits.
Based on all the reflections of the past years, my idea for a region- or state-wide numbering+labeling system is as follows (in RFC-style wording):
- it MUST use a barcode as the primary means of reading the numerical ID from the box label
- the label MUST contain both the barcode and the barcode content as human-readable text
- it SHOULD use a random numeric sequence
- it MUST use a fixed-length string of numbers
- it MUST avoid the use of any suffixes like a, b, bis
In practice, I would like to use UUID4 together with a barcode.
A UUID4 looks like this: 1b08bcde-830f-4afd-bdef-18ba918a1b32. It is the UUID version of a random number, it can be generated rather easily, works well with barcodes and has a collision probability that is compatible with the scale I’m concerned with ‒ incidentally I think it’s lower than the probability of human error in assigning a number or writing it down with a pencil or a keyboard. The label will contain the UUID string as text, and the barcode. There will be no explicit URL in the barcode, and any direct link to a data management system will be handled by the same application used to read the barcode (that is, a mobile app with an embedded barcode reader). The data management system will use UUID as part of the URL associated with each box. You can prepare labels beforehand and apply them to boxes afterwards, recording all the UUIDs as you attach the labels to the boxes. It doesn’t sound straightforward, but in practice it is.
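For illustration, generating such a label needs only Python's standard library; the `make_label` helper is hypothetical:

```python
import uuid


def make_label():
    """Generate a box label: a random UUID4 in its canonical 36-character
    form, used both as the barcode content and as the human-readable
    text printed under the barcode."""
    return str(uuid.uuid4())
```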
And since we’re deep down the rabbit hole, why stop at the boxes? Let’s recall some of the issues that I described non-linearly above:
- the content of boxes is not immutable: one day item X is in box Y, the next day it gets moved to box Z
- the location of boxes is not immutable: one day box Y is in room A of building B, the next day it gets moved to room C of building D
- both of the above can and will occur in bulk, not only as discrete events
The same UUIDs can be applied in both directions in order to describe the location of each item in a large bottom-up tree structure (add as many levels as you see fit, such as shelf rows and columns):

item X → box Y → shelf Z → room A → building B
or, with UUIDs:

b68e3e61-e0e7-45eb-882d-d98b4c28ff31 → 3ef5237e-f837-4266-9d85-e08d0a9f4751
3ef5237e-f837-4266-9d85-e08d0a9f4751 → 77372e8c-936f-42cf-ac95-beafb84de0a4
77372e8c-936f-42cf-ac95-beafb84de0a4 → e895f660-3ddf-49dd-90ca-e390e5e8d41c
e895f660-3ddf-49dd-90ca-e390e5e8d41c → 9507dc46-8569-43f0-b194-42601eb0b323
Now imagine adding a second item W to the same box: since the data for box Y is already complete, one just needs to fill in a single container relationship:

b67a3427-b5ef-4f79-b837-34adf389834f → 3ef5237e-f837-4266-9d85-e08d0a9f4751
and since we would have already built our hypothetical data management system, this data is entered simply by scanning two barcodes on a mobile device that syncs as soon as a connection is available. Moving one box to another shelf is again a single operation, even though many items move with it, because the leaves and branches of the data tree are naïve: they know only about their parents and children, and nothing about grandparents and siblings.
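The naïve parent-pointer tree described above can be sketched in a few lines of Python (all names are hypothetical, not an actual system's API):

```python
import uuid

# Each node (item, box, shelf, room, building) knows only its parent.
parent = {}

def new_node():
    node = str(uuid.uuid4())
    parent[node] = None
    return node

def place(child, container):
    """Record 'child is inside container' -- one relationship per scan."""
    parent[child] = container

def location(node):
    """Walk up the tree to get the full location path of a node."""
    path = [node]
    while parent.get(node) is not None:
        node = parent[node]
        path.append(node)
    return path  # e.g. [item, box, shelf, room, building]

# item X sits in box Y on shelf Z in room A of building B
building, room, shelf, box = (new_node() for _ in range(4))
item = new_node()
place(room, building); place(shelf, room); place(box, shelf); place(item, box)

# Moving the box to another shelf is a single operation,
# no matter how many items the box contains.
other_shelf = new_node()
place(other_shelf, room)
place(box, other_shelf)
```

After the move, `location(item)` already reports the new shelf, because the item's record was never touched: only the box's single parent pointer changed.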
There are a few more technical details about data structures needed to have a decent proof of concept, but I already wrote down too many words that are tangential to the initial question of how to number boxes.
In this penultimate post of the course on learning the basics of Geographic Information Systems through practical exercises with Game of Thrones data, we are going to work with the "Map" document.
A Map document is a set of layout elements of a map or plan, organized on a virtual page and intended for graphical output (printing or exporting to PDF). What you see in the layout is what you get when printing or exporting the map at the defined page size. Two types of elements can be inserted into a Map: cartographic elements and design elements.
In our case we are going to create a map with the route followed by the Greyjoy brothers, drawn in the post on "Graphical editing".
Once our project is open in gvSIG, the first thing to do is go to the "Project manager" window. A quick way to get there is the "Show/Project manager" menu. Select the "Map" document type and press the "New" button. A new window will open where we define the properties of the Map page.
In our case we select an "A4" page size with "Landscape" orientation, and indicate that it should use the View where our layers are loaded instead of "Create new View". If you have more than one View in your project, a list of all of them will appear.
By clicking on the "black squares" that appear at the corners and midpoints of the rectangle defining the extent of the View, we can resize it, gradually laying out our map design. Clicking on the inserted View element and dragging moves it. In our case we resize and reposition the inserted View, and then go on to add other cartographic elements.
Most cartographic elements are closely tied to a View document, so changes made in the View (zooming, panning, legend changes, layer ordering, etc.) can be reflected in the map. These tools are available from the "Map/Insert" menu and the corresponding toolbar.
The legend is always associated with a View inserted in the Map and represents the symbology of that View's layers. Once the tool is selected, click on the Map area at the desired location to set the first corner of the rectangle the legend will occupy, and drag to the opposite corner. A dialog will appear where you can define the graphical properties of the inserted legend:
The north arrow works the same way: once the tool is selected, click on the Map area to set the first corner of the rectangle the north symbol will occupy, and drag to the opposite corner. A dialog will appear where you can define the graphical properties of the inserted north arrow:
Finally we insert a title with the "Insert text" tool (Map/Insert/Text menu or its corresponding button). It works like the other elements; in this case we enter the text we want to display: "Greyjoy Brothers".
From here, so as not to make this post too long, we encourage you to review the documentation on the Map document and to try inserting graphic scales, frames, etc., as well as the drawing-aid tools. With practice you can produce really well-designed maps.
You can now send your PDF file to all your contacts.
As they say, practice makes perfect... so now you know.
One post remains to close the course... don't miss it.
Filed under: gvSIG Desktop, spanish Tagged: Exportar a PDF, Juego de tronos, mapa
On February 25, 2013, Christy Clark took the stage at the “International LNG Conference” in Vancouver to trumpet her government’s plans to have “at least one LNG pipeline and terminal in operation in Kitimat by 2015 and three in operation by 2020”.
Notwithstanding the Premier’s desire to frame economic development as a triumph of will, and notwithstanding the generous firehosing of subsidies and tax breaks on the still-nascent sector, the number of LNG pipelines and terminals in operation in Kitimat remains stubbornly zero. The markets are unlikely to relent in time to make the 2020 deadline.
And about that “conference”?
Like the faux “Bollywood Awards” that the government paid $10M to stage just weeks before the 2013 election, the “LNG in BC” conference was a government organized “event” put on primarily to advance the pre-election public relations agenda of the BC Liberal party.
In case anyone had any doubts about the purpose of the “event”, at the 2014 edition an exhibitor helpfully handed out a brochure to attendees, featuring an election night picture of the Premier and her son, under the title “We Won”.
The “LNG in BC” conference continued to be organized by the government for two more years, providing a stage each year for the Premier and multiple Ministers to broadcast their message.
The government is no longer organizing an annual LNG confab, despite its protestations that the industry remains a key priority. At this point, it would generate more public embarrassment than public plaudits.
Instead, we have a new faux “conference”, slated to run March 14-15, just four weeks before the 2017 election begins: the #BCTech Summit.
Like “LNG in BC”, the “BCTech Summit” is a government-organized and government-funded faux industry event, put on primarily to provide an expensive backdrop for BC Liberal politicking.
The BC Innovation Council (BCIC) that is co-hosting the event is itself a government-funded advisory council run by BC Liberal appointees, many of whom are also party donors. To fund the inaugural 2016 version of the event, the Ministry of Citizens Services wrote a direct award $1,000,000 contract to the BCIC.
The pre-election timing is not coincidental; it is part of a plan that dates all the way back to early 2015, when Deputy Minister Athana Mentzelopoulos directed staff to begin planning a “Tech Summit” for spring of the following year.
“We will not be coupling the tech summit with the LNG conference. Instead, the desire is to plan for the tech summit annually over the next couple of years – first in January 2016 and then in January 2017.” – email from A. Mentzelopoulos, April 8, 2015
The intent of creating a “conference” to sell a new-and-improved government “jobs plan”, and the source of that plan, was made clear by the government manager tasked with delivering the event.
“The push for this as an annual conference has come from the Premier’s Office and they want to (i) show alignment with the Jobs Plan (including the LNG conference) and (ii) show this has multi-ministry buy-in and participation.” – S. Butterworth, April 24, 2015
The event was not something industry wanted. It was not even something the BCIC wanted. It was something the Premier’s Office wanted.
And so they got it: everyone pulled together, the conference was put on, and it made a $1,000,000 loss which was dutifully covered by the Province via the BC Innovation Council, laying the groundwork for 2017’s much more politically potent version.
This year’s event will be held weeks before the next election. It too will be subsidized heavily by the government. And as with the LNG conference, exhibitors and sponsors will plunk down some money to show their loyalty to the party of power.
The platinum sponsors of LNG in BC 2015 were almost all major LNG project proponents: LNG Canada, Pacific Northwest LNG, and Kitimat LNG. Were they, like sponsors at a normal trade conference, seeking to raise their profile among attendees? Or were they demonstrating their loyalty to the government that organized the event and then approached them for sponsorship dollars?
It is hard to avoid the conclusion that these events are just another conduit for cash in our “wild west” political culture, a culture that shows many of the signs of “systematic corruption” described by economist John Wallis in 2004.
“In polities plagued with systematic corruption, a group of politicians deliberately create rents by limiting entry into valuable economic activities, through grants of monopoly, restrictive corporate charters, tariffs, quotas, regulations, and the like. These rents bind the interests of the recipients to the politicians who create them.”
Systematically corrupt governments aren’t interested in personally enriching their members, they are interested in retaining and reinforcing their power, through a virtuous cycle of favours: economic favours are handed to compliant economic actors who in turn do what they can to protect and promote their government patrons.
The 2017 #BCTech conference already has a title sponsor: Microsoft. In unrelated news, Microsoft is currently negotiating to bring their Office 365 product into the BC public sector. If the #BCTech conference was an ordinary trade show, these two unrelated facts wouldn’t be cause for concern. But because the event is an artificially created artifact of the Premier’s Office, a shadow is cast over the whole enterprise.
Who is helping who here, and why?
A recent article in Macleans included a telling quote from an anonymous BC lobbyist:
If your client doesn’t donate, it puts you at a competitive disadvantage, he adds. It’s a small province, after all; the Liberals know exactly who is funding them, the lobbyist notes, magnifying the role donors play and the access they receive in return.
As long as BC remains effectively a one-party state, the cycle of favours and reciprocation will continue. Any business subject to the regulatory or purchasing decisions of government would be foolish not to hedge its bets with a few well-placed dollars in the pocket of BC’s natural governing party.
The cycle is systematic and self-reinforcing, and the only way to break the cycle, is to break the cycle.
During the 12th International gvSIG Conference there was a workshop about “Geopaparazzi and gvSIG” in English.
At this workshop, attendees learned how to install Geopaparazzi, load layers, enter data, export data to gvSIG Desktop…
The workshop was recorded, so you can follow it now if you couldn’t attend it.
Here you have the video:
The data used at the workshop are available here.
And the gvSIG plugins for Geopaparazzi, with some instructions for the installation, are available here.
Finally, you can take a look at this interesting presentation about the state of the art of Geopaparazzi and its integration in gvSIG given at the 12th International gvSIG Conference, State of the art of Geopaparazzi: towards gvSIG Mobile:
Filed under: english, events, Geopaparazzi, gvSIG Desktop, technical collaborations, training
We apologize in advance: this post, announcing that we have finalized the implementation of the DCAT-AP_IT Metadata Profile on top of the CKAN Open Data product, is addressed primarily to our Italian readers.
We are pleased to announce the first release of the CKAN extension supporting the DCAT-AP_IT application profile in Italian open data portals. The development was jointly supported by the Province of Bolzano/Sud Tirol and the Province of Trento, and is freely available under the AGPL v3.0 license.
The profile for documenting public administration data (DCAT-AP_IT), published by the Agency for Digital Italy (AgID), was created to harmonize the metadata used to describe public datasets, in order to improve their quality and encourage reuse of the information. The ckanext-dcatapit extension, developed by GeoSolutions, is available in a dedicated repository under our GitHub account, with thorough documentation covering its features, requirements, and installation and configuration instructions (the repository will soon be moved under the aegis of AgID). Italian public administrations can therefore use the extension to make their catalogues conform to the Italian DCAT-AP_IT profile and to promote sharing and standardization with the other PAs across Italy. [Figure: dataset view page]
Thanks to the considerable experience of the GeoSolutions development team in building numerous CKAN extensions, as well as in installing and configuring many catalogues based on this platform, the ckanext-dcatapit extension provides a solid and varied set of features, not only for guided dataset creation but also for integrating metadata from external sources (CSW, RDF, JSON-LD) in conformance with the Application Profile. [Figure: dataset edit form]
The ckanext-dcatapit extension was developed with scrupulous attention not only to its core features and their stability, but also to ensuring the highest possible compatibility with the other extensions often present in existing CKAN installations. It also eases integration with custom extensions that need to define additional dataset fields. Multilingualism and interface localization have likewise been addressed, to guarantee maximum usability for organizations that need them, such as the Provinces of Bolzano/Sud Tirol and Trento: the extension ships its own localization files, which help streamline such customizations, while the ckanext-multilang extension provides support for multilingual catalogue content (datasets, organizations, groups and more). [Figure: multilingual support in action]
Below is a list of the PAs already using ckanext-dcatapit:
- The OpenData portal of the Province of Bolzano/Sud Tirol.
- The OpenData portal of Trentino.
- The federated OpenDataNetwork infrastructure (for now still in testing), led by the Città Metropolitana di Firenze, which collects and distributes data from various Tuscan bodies including: Città Metropolitana di Firenze, Provincia di Prato, Provincia di Pistoia and Autorità di Bacino dell'Arno.
gvSIG Team: gvSIG is nominated at the maximum category of the “Sharing & Reuse Awards” given by the European Commission
As the title of this post says, the work of the gvSIG Project has been recognised by the European Commission with a nomination in the top category of the “Sharing & Reuse Awards”.
The General Manager for Communication and Information Technologies, Vicente Aguiló, has announced that “the gvSIG project, born at the Generalitat, has been selected by the European Commission as a finalist for the Best open source software solution in the cross-border category” at the first edition of the Sharing & Reuse Awards.
The project will compete in the category with the greatest reach, the international one, together with three other finalist proposals. After evaluating 118 proposals from across the European Union, the Commission ultimately nominated a total of 17 projects for the cross-border, national, regional and local categories.
The EU executive created the Sharing & Reuse Awards in recognition of the modernization of public administrations in Europe through the development of e-government solutions that, thanks to open source software, can be reused by other organizations.
The final results will be announced on March 29th, 2017 in Lisbon, at the Sharing & Reuse Conference 2017, whose slogan is “Solving the European IT puzzle together”. Regardless of the result, being among the 4 nominated projects is a recognition of the gvSIG project and everything that has been built around it.
The event, in its first edition, will bring together the international community of experts in open source software and public administration, as well as representatives of the European institutions, to debate the advantages of sharing and reusing IT solutions in the public sector.
The Communication and Information Technologies General Management (DGTIC) will present the gvSIG project at the conference in Lisbon, together with the other finalist proposals, coming from Germany, Austria, Belgium, Spain, Finland, France, Greece, the Netherlands and Czech Republic.
The Communication and Information Technologies General Manager of the Generalitat has highlighted that “this nomination is a recognition of the track record of a project that has been developed inside and outside the Administration and that has turned the Valencian Community into an international reference for geolocation through the use of open source software”.
Vicente Aguiló recalled that “the project was born at the Generalitat to create a geographic information system based on open source code; once launched, it was released to an international community of developers who now make up the gvSIG Association, of which we are part”.
You can consult all the information about these awards and nominated projects in each category in:
See you in Lisbon!
Filed under: english, gvSIG Desktop, premios, press office, software libre Tagged: European Comission
Since mid-2016, at the invitation of Jody Garnett, I have been writing on the GeoServer Blog (in English). In my second post, I decided to write a little about the 10 years that the GeoServer-BR community is celebrating in 2017.
I am immensely proud to know that it all started in 2007, with the course I taught at the III ENUM (National MapServer Users Meeting) in Brasília/DF.
Over the years, it has been very gratifying to see how widespread GeoServer is in Brazil and how widely it has been adopted by companies of all sectors and sizes, as well as by government agencies, which made it the official map server of the INDE (National Spatial Data Infrastructure).
I would like to thank Boundless, the company that maintains GeoServer, as well as everyone who contributed in some way over these 10 years to the growth and dissemination of GeoServer in Brazil.
Consequently, all the plugins should migrate to Python 3 and QT 5, and adapt to the API breaks, in order to work in QGIS 3.
I have updated the Semi-Automatic Classification Plugin (SCP) to version 5.99, which runs in QGIS 3 (but not in QGIS 2). The tools of this version are the same as in SCP version 5, but all the functions run on Python 3 and Qt 5 and have been adapted to the new QGIS APIs.
SCP 5.99 running in QGIS 2.99
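For plugin authors wondering what the migration involves, a few of the typical Python 3 and Qt 5 changes look like this (a generic sketch, not SCP's actual code; the PyQt import paths assume a QGIS 3 environment and are shown as comments):

```python
# QGIS 2 plugins imported Qt widgets like:  from PyQt4.QtGui import QDialog
# In QGIS 3 the same class lives in:        from PyQt5.QtWidgets import QDialog

# Python 2 -> 3 changes that quietly break plugin code:
print("hello")                 # print is a function, no longer a statement
half = 3 / 2                   # true division: 1.5, not 1 (use // for floor)
rows = list({"a": 1}.keys())   # dict views must be materialized to index them
text = "già"                   # all strings are unicode by default
```

On top of these language-level changes come the QGIS API breaks themselves, which are specific to each class a plugin touches.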
NEW! Read the first test of the TIN capabilities of SAGA GIS.
People have already helped on Twitter, and I'll include some of these suggestions in this post.
Help! Are there any good #opensource solutions to turn a DEM into a TIN? #gistribe #foss4g #gis #3d #postgis pic.twitter.com/YWV59HtNgW— Bjørn Sandvik (@thematicmapping) September 6, 2016
My example DEM of Jotunheimen in Norway can be downloaded here (144 MB GeoTIFF). This is the same dataset I've used previously for my terrain mapping experiments with three.js and Cesium.
The goal now is to turn this raster DEM into a nice triangulated irregular network (TIN) optimised for 3D rendering.
The dream solution would be a command line tool (part of GDAL?) that can turn a raster DEM into an optimised TIN.
Open source candidates:
- SAGA GIS
- GRASS GIS
- Point Cloud Library (PCL)
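None of the candidates above is a drop-in answer yet, but the core operation they would all perform — sample the raster, then triangulate — can be sketched with SciPy (an illustration under stated assumptions, not an optimised TIN: real tools keep more points where curvature is high and fewer on flat terrain, and `numpy`/`scipy` are assumed installed; a tiny synthetic grid stands in for the 144 MB GeoTIFF):

```python
import numpy as np
from scipy.spatial import Delaunay

# Tiny synthetic DEM in place of the Jotunheimen GeoTIFF
dem = np.random.default_rng(0).random((50, 50))

# Naive decimation: keep every 5th cell (an adaptive method would
# drop far more points on flat terrain)
step = 5
ys, xs = np.mgrid[0:dem.shape[0]:step, 0:dem.shape[1]:step]
points = np.column_stack([xs.ravel(), ys.ravel()])
heights = dem[ys.ravel(), xs.ravel()]

# Triangulate in 2D; each kept cell's height becomes the z of a TIN vertex
tin = Delaunay(points)
print(len(points), "vertices,", len(tin.simplices), "triangles")
```

The interesting part — choosing *which* points to keep so the TIN stays within a height-error tolerance of the original DEM — is exactly what a dedicated tool would add on top of this.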
After hosting a successful GeoPython conference in 2016, the 2017 edition of the GeoPython Conference will take place from May 8 to 10 in Basel/Muttenz, Switzerland.