OSGeo Planet

Planet OSGeo - http://planet.osgeo.org

Just van den Broecke: Emit #3 – Things are Moving

Fri, 2018-02-16 23:03

This is Emit #3, in a series of blog-posts around the Smart Emission Platform, an Open Source software component framework that facilitates the acquisition, processing and (OGC web-API) unlocking of spatiotemporal sensor-data, mainly for Air Quality. In Emit #1, the big picture of the platform was sketched. Subsequent Emits will detail technical aspects of the SE Platform. “Follow the data” will be the main trail loosely adhered to.

It has been three weeks since Emit #2, and a lot of Things have been happening since:

A lot to expand on. I will try to briefly summarize The Things Conference, LoRa and LoRaWAN, and save the other tech for later Emits.

LoRa and LoRaWAN may sound like a sidestep, but they are very much related to, for example, building a network of Air Quality and other environmental sensors. When deploying such sensors, two issues always arise:

  • power: need continuous electricity to keep sensors and their computing boards powered
  • transmission: need cabled Internet, WiFi or cellular data to transmit sensor-data

In short, LoRa/LoRaWAN (LoRa = Long Range) is basically a wireless RF technology for long-range, low-power and low-throughput communications. You may find many references on the web, such as those from the LoRa Alliance and Semtech. There is lots of buzz around LoRa. But just like the Wireless Leiden project, which built a public WiFi network around that city, The Things Network has embraced LoRa technology to build a worldwide, open, community-driven “overlay network”:

“The Things Network is building a network for the Internet of Things by creating abundant data connectivity, so applications and businesses can flourish. The technology we use is called LoRaWAN and it allows for things to talk to the internet without 3G or WiFi. So no WiFi codes and no mobile subscriptions. It features low battery usage, long range and low bandwidth. Perfect for the internet of things.”

You may want to explore the worldwide map of TTN gateways below.

The Things Network (TTN) was established in Amsterdam, The Netherlands. As an individual you can extend The Things Network by deploying a Gateway. Via the TTN KickStarter project, I was one of the first backers, back in 2015. The interest was overwhelming, even leading to (Gateway) delivery problems. But a LoRa Gateway to extend TTN is almost a commodity now. You can even build one yourself. TTN is very much tied to the whole “DIY makers movement”. All TTN designs and code (on GitHub) are open. Below is a global architecture picture from their site.

So TTN organized their first conference, of course in Amsterdam. It ran for three days and was an amazing success, with more than 500 enthusiasts.

The conference was very hands-on with lots of free workshops (with free takeaway hardware). I followed several workshops, which were intense (hardware + software hacking) but always rewarding (blinking green lights!). One to mention in particular (as a Python programmer) was on the LoPy, a sort of Arduino board, very low cost (around $30) and programmable with MicroPython, that connects directly to TTN. Within an hour the board was happily sending meteo-data to TTN.
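
To give an idea of what that workshop code looks like, here is a minimal MicroPython sketch for joining TTN and sending a payload from a LoPy. This is a sketch under assumptions: the EUI/key values are placeholders, and it follows the Pycom LoRa API as documented, not the exact workshop code (newer firmware also takes a region argument, e.g. LoRa(mode=LoRa.LORAWAN, region=LoRa.EU868)).

    # Join TTN via OTAA and send a tiny payload from a LoPy (Pycom MicroPython).
    import socket
    import time
    import binascii
    from network import LoRa

    lora = LoRa(mode=LoRa.LORAWAN)
    app_eui = binascii.unhexlify('0000000000000000')                  # placeholder: your TTN app EUI
    app_key = binascii.unhexlify('00000000000000000000000000000000')  # placeholder: your TTN app key

    lora.join(activation=LoRa.OTAA, auth=(app_eui, app_key), timeout=0)
    while not lora.has_joined():
        time.sleep(2.5)

    s = socket.socket(socket.AF_LORA, socket.SOCK_RAW)
    s.setsockopt(socket.SOL_LORA, socket.SO_DR, 5)  # data rate
    s.send(b'\x01\x42')                             # e.g. a one-byte sensor reading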

All in all this conference made me eager to explore more of LoRa with TTN, in particular the possibilities for citizen-based sensor networks for environmental data, air quality in particular. I am aware that “IoT” has some bad connotations when it comes to security, especially around closed technologies. But IoT is a movement we cannot stop. With an end-to-end open technology like TTN there is at least the possibility to avoid the “black box” part and take Things into our own hands.

Categories: OSGeo Planet

gvSIG Team: Recording of webinar on “gvSIG Suite: open source software for geographic information management in agriculture” is now available

Fri, 2018-02-16 09:33

If you weren’t able to follow the webinar on “gvSIG Suite: open source software for geographic information management in agriculture”, organized by GODAN and the gvSIG Association, you can watch the recording now on the gvSIG Youtube channel:

The webinar was aimed at showing the gvSIG Suite, a complete catalog of open source software solutions, applied to agriculture.

The gvSIG Suite is composed of ‘horizontal’ products:

  • gvSIG Desktop: Geographic Information System for editing, 3D analysis, geoprocessing, maps, etc.
  • gvSIG Online: Integrated platform for Spatial Data Infrastructure (SDI) implementation.
  • gvSIG Mobile: Mobile application for Android to take field data.

and sector products:

  • gvSIG Roads: Platform to manage roads inventory and conservation.
  • gvSIG Educa: gvSIG adapted to geography learning in pre-university education.
  • gvSIG Crime: Geographic Information System for Criminology management.

At the webinar we also showed several successful case studies in agriculture and forestry. You can also consult other case studies in this sector and other related sectors on the gvSIG Outreach website (they are in their original language but there’s a translator at the left side):

The presentation is available at this link.

If you want to download gvSIG Desktop you can do it from the gvSIG website, gvSIG Mobile is available from the Play Store, and if you are interested in implementing gvSIG Online in your organization you can contact us by e-mail: info@gvsig.com.

If you have any doubts or problems with the application you can use the mailing lists.

And here you have several links about training on gvSIG, with free courses:

Categories: OSGeo Planet

gvSIG Team: GIS applied to Municipality Management: Module 12 ‘Geoprocessing’

Thu, 2018-02-15 08:52

The video of the twelfth module is now available, in which we will see the geoprocessing tools in gvSIG.

gvSIG has more than 350 geoprocesses, both for raster and vector layers, which allow us to perform different types of analysis, for example to obtain the optimal areas to locate a specific type of infrastructure.

Using the geoprocesses available in gvSIG we can, for example, create buffers to calculate, among other things, road or railway rights of way. An intersection can then be applied with a parcel layer to obtain the part of each parcel that should be expropriated. We can also run hydrological analysis, merge layers…
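
To make the reasoning concrete, here is the same buffer-and-intersect logic sketched in Python with the Shapely library; this is an illustration outside gvSIG's own geoprocessing tools, with made-up geometries.

    # A road axis, a 15 m right-of-way buffer, and one parcel to clip against it.
    from shapely.geometry import LineString, Polygon

    road = LineString([(0, 0), (200, 0)])
    right_of_way = road.buffer(15)

    parcel = Polygon([(50, -40), (120, -40), (120, 40), (50, 40)])
    expropriated = parcel.intersection(right_of_way)

    print(expropriated.area)  # 2100.0 square meters to expropriate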

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:

Related posts:

Categories: OSGeo Planet

GeoSolutions: Latest on GeoNode: Local Monitoring Dashboard

Wed, 2018-02-14 16:51


Dear All,

in this post we would like to introduce a plugin which we have developed for GeoNode and released as Open Source (full documentation available here), giving users the ability to keep under control the hardware as well as the software load on a GeoNode installation. This plugin is called Monitoring; it is available as an Open Source contrib module for GeoNode, and documentation on how to enable and configure it can be found here.

Overview of the monitoring capabilities

We are now going to provide an overview of the functionality provided by this plugin; it is worth pointing out that, given the sensitive nature of the information shown, it is accessible only to GeoNode users that have the administrator role.

The plugin allows administrators to keep under control the hardware and software load on a GeoNode instance by collecting, aggregating, storing and indexing a large amount of information that is normally hidden and spread across various logs which are difficult to find when troubleshooting, like GeoNode’s own log, the GeoServer log, the audit log and so on; in addition, it also collects information about hardware load on memory and CPU (disk could be added easily), which is important to collect on live instances.

Alerts can also be created that check certain conditions and then send a notification email to preconfigured email addresses (more on this here).

It is also possible to look at OGC Service statistics on a per-service and per-layer basis. Finally, a simple country-based map that shows where requests are coming from is available.

Overview of the available analytics

Let us now dive a little into the functionalities provided by this plugin. Here below you can see the initial homepage of the plugin.

[caption id="attachment_3878" align="alignnone" width="1024"]Monitoring Plugin Homepage Monitoring Plugin Homepage[/caption] We tried to put on the homepage a summary of the available information so that a user can quickly understand what is going on. The first row provides a series of visual controls that give an overview of the instance's health at different aggregation time ranges (from 10 mins to 1 Week):
  • Health Check - tells us if there are any alerts or errors that require the attention of the administrator. Colors range from Red (at least one Error has happened in the selected time range) to Yellow (no Errors but at least one Alert has triggered within the selected time range) and finally to Green (no Alerts or Errors).
  • Uptime - shows GeoNode system uptime.
  • Alerts - shows the number of notifications from defined checks. When clicked, the Alerts box will show detailed information. See the Notifications description for details.
  • Errors - shows how many errors were captured during request processing. When clicked, the Errors box will show a detailed list of captured errors.
The Software Performance section is responsible for showing analytics about the overall performance of both GeoNode itself and the OGC back-end.

[Figure: Software Performance Summary Dashboard]

If we click on the upper right corner icon, the detailed view will be shown, as illustrated below. Additional detailed information over the selected time period will be shown for GeoNode and the OGC Services, and then for individual layers and resources.

[caption id="attachment_3906" align="alignnone" width="1024"]Software Performance Detailed Dashboard - 1 Software Performance Detailed Dashboard - 1[/caption] [caption id="attachment_3907" align="alignnone" width="1024"]Software Performance Detailed Dashboard - 2 Software Performance Detailed Dashboard - 2[/caption]

The Hardware Performance section is responsible for showing analytics about CPU and Memory usage of the machine where GeoNode runs as well as of the machine where GeoServer runs (in case it runs on a separate machine); see figure below.

[caption id="attachment_3901" align="alignnone" width="1024"]Hardware Performance Detail Section Hardware Performance Detail Section[/caption]

Interesting points and next steps

The plugin provides additional functionality which is described in detail in the GeoNode documentation (see this page); as mentioned above, we can inspect errors in the logs directly from the plugin, and we can set alerts that send notification emails when they trigger. Moreover, the plugin provides a few additional endpoints that make it easier to monitor a GeoNode instance from the GeoHealthCheck Open Source project (as explained here).

[caption id="attachment_3912" align="alignnone" width="1024"]Inspecting the error log Inspecting the error log[/caption]

Last but not least, we would like to thank the GFDRR group at the World Bank, which provided most of the funding for this work.

If you are interested in learning about how we can help you achieve your goals with open source products like GeoServer, MapStore, GeoNode and GeoNetwork through our Enterprise Support Services and GeoServer Deployment Warranty offerings, feel free to contact us!

The GeoSolutions team,
Categories: OSGeo Planet

CARTO Inside Blog: CARTO Core Team and 5x

Wed, 2018-02-14 10:00

CARTO is open source, and is built on core open source components, but historically most of the code we have written has been in the “CARTO” parts, and we have used the core components “as is” from the open source community.

While this has worked well in the past, we want to increase the velocity at which we improve our core infrastructure, and that means getting intimate with the core projects: PostGIS, Mapnik, PostgreSQL, Leaflet, MapBox GL and others.

Our new core technology team is charged with being the in-house experts on the key components, and the first problem they have tackled is squeezing more performance out of the core technology for our key use cases. We called the project “5x” as an aspirational goal – can we get multiples of performance improvement from our stack? We knew “5x” was going to be a challenge, but by trying to get some percentage improvement from each step along the way, we hoped to at least get a respectable improvement in global performance.

Our Time Budget

A typical CARTO visualization might consist of a map and a couple of widget elements.

The map will be composed of perhaps 12 (visible) tiles, which the browser will download in parallel, 3 or 4 at a time, so roughly three waves of downloads. In order to get a completed visualization delivered in under 2 seconds, that implies the tiles need to be delivered in under 0.5s each and the widgets in no more than 1s.

Ideally, everything should be faster, so that more load can be stacked onto the servers without affecting overall performance.

The time budget for a tile can be broken down even further:

  • database retrieval time,
  • data transit to map renderer,
  • map render time,
  • map image compression, and
  • image transit to browser.

The time budget for a widget is basically all on the database:

  • database query execution, and
  • data transit to JavaScript widget.

The project goal was to add incremental improvements to as many slices as possible, which would hopefully together add up to a meaningful difference.

Measure Twice, Cut Once

In order to continuously improve the core components, we needed to monitor how changes affected the overall system, against both a long-term baseline (for project-level measurements) and short-term baselines (for patch-level measurements).

To get those measurements, we:

  • Enhanced the metrics support in Mapnik, so we could measure the amount of time spent in retrieving data, rendering data, and compressing output.
  • Built an internal performance harness, so we can measure the cost of various workloads end-to-end.
  • Carried out micro-benchmarks of particular workloads at the component level. For PostGIS, that meant running particular SQL against sample data. For Mapnik that meant running particular kinds of data (large collections of points, or lines, or polygons) through the renderer with various stylings.

Using the measurements as a guide, we then attacked the performance problem.

Low Hanging Fruit

Profiling, running performance tests, and doing a little bit of research turned up three major opportunities for performance improvement:

  • PostgreSQL parallelism was the biggest potential win. With version 10 coming out shortly, we had an opportunity to get “free” improvements “just” by ensuring all the code in CARTO was parallel safe and marked as such (see the sketch after this list). Reviewing all the code for parallel safety also surfaced a number of other potential efficiency improvements.
  • Mapnik turned out to have a couple areas where performance could be improved, through caching features rather than re-querying, and in improving the algorithms used for rendering large collections of points.
  • PostGIS had some small bottlenecks in the critical path for CARTO rendering, including some inefficient memory handling in TWKB that impacted point encoding performance.
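
As a concrete illustration of what “marked as such” means on the PostgreSQL side, here is a minimal sketch driven from Python with psycopg2; the connection string and function name are hypothetical, not CARTO's actual code.

    # Mark a function as safe to run inside PostgreSQL parallel workers,
    # then check the flag in the system catalog.
    import psycopg2

    conn = psycopg2.connect("dbname=carto")
    cur = conn.cursor()

    cur.execute("ALTER FUNCTION hypothetical_hexbin(geometry, float8) PARALLEL SAFE")

    # proparallel: 's' = safe, 'r' = restricted, 'u' = unsafe (the default)
    cur.execute("SELECT proparallel FROM pg_proc WHERE proname = 'hypothetical_hexbin'")
    print(cur.fetchone())
    conn.commit()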

Most importantly, during our work on core code improvements, we brought all the core software into the CARTO build and deployment chain, so these and future improvements can be quickly deployed to production without manual intervention.

We want to bring our improvements back to the community versions, and at the same time have early access to them in the CARTO infrastructure, so we follow a policy of contributing improvements to the community development versions while back-patching them into our own local branches (PostGIS, Mapnik, PostgreSQL).

And In the End

Did we get to “5x”? No. In our end-to-end benchmarks we notched a range of different improvements, from a few percent to a few times, depending on the use case. We also found our integration benchmarks were sensitive to pressure from other load on our testing servers, so we relied mostly on micro-benchmarks of the different components to confirm local performance improvements.

While the performance improvements have been gratifying, some of the biggest wins have been the little improvements we made along the way:

We made a lot of performance improvements across all the major projects on which CARTO is based: you may have already noticed those improvements. We’ve also shown that optimizations can only get you so far; sometimes taking a whole new approach is a better plan. A good example of this is the vector and raster data aggregation work we have been doing, reducing the amount of data transferred with clever summarization and advanced styling.

More changes from our team are still rolling out, and you can expect further platform improvements as time goes on. Keep on mapping!

Categories: OSGeo Planet

GeoNode: GeoNode Summit 2018

Wed, 2018-02-14 00:00
Join the awesome GeoNode community for the Summit 2018, from 26 to 28 March 2018, in the elegant city of Turin, Italy!

Summit Website



Categories: OSGeo Planet

OSGeo.nl: Report: OSGeo.nl and OpenStreetMap NL New Year's Meetup 2018

Tue, 2018-02-13 22:33

On Sunday 14 January 2018 the traditional OSGeo.nl and OpenStreetMap NL New Year's meetup, held in the now completely renovated upstairs room of Cafe Dudok in Hilversum, was once again well attended. It is one of the few events where these two communities come together (we should do that more often!).

Every year it turns out again how many shared interests these communities have: not only around the open data of the Dutch key registries (BAG, BGT, BRK, Top10NL etc.) and projects built on them such as NLExtract, but also around, for example, Missing Maps and QGIS. It whets the appetite for more events in 2018 to jointly strengthen Open Source and Open Data for geo-information in the Netherlands.

Among those present there was clearly a lot of knowledge and, above all, enthusiasm to share it. Besides the bitterballen and craft beers tasting great again, there were a number of presentations, announcements and plans. More below, with links to the slides:

1. OSGeo.nl: Looking back at 2017, plans for 2018 – Just van den Broecke – Slides PDF
Gert-Jan van der Weijden stepped down after 5 years of outstanding OSGeo.nl chairmanship. The current OSGeo.nl board consists of Just van den Broecke (chair), Paulo van Breugel (secretary) and Barend Köbben (treasurer). But above all, 2017 was the year in which the first FOSS4G NL took place, in Groningen. Thanks to the efforts of a great team including Erik Meerburg, Leon van der Meulen, Willy Bakker and many volunteers from the Rijksuniversiteit Groningen, this event was a huge success. More on its follow-up later. And looking ahead: our ambitions for 2018 are too many to list. Above all a FOSS4G NL 2018, but also, given for example the overwhelming number of registrations for our GRASS Crash Course, we want to focus in 2018 on small-scale, targeted, hands-on events. Let us know if you have ideas for these.

2. Raymond Nijssen – 510.global data team Red Cross – Slides PDF

Raymond took us to Sint Maarten in his often gripping personal story as a volunteer for the 510.global data team of the Red Cross, for which he had signed up shortly after hurricane Irma. With the limited means and connectivity there, Raymond and the team, using the OpenStreetMap ecosystem and tools like QGIS, managed to effectively produce overview maps for relief workers. Kudos Raymond, you are an example to us all!

3. Rob van Loon, Ordina – Managing GeoServer configuration with Python scripts – Slides PDF

GeoServer is deployed in very many places in the Netherlands. The accompanying GeoServer Web UI for configuring layers and styling is handy. But in many situations automated configuration of GeoServer is much more effective: think of DTAP pipelines, or multiple, sometimes hundreds of, nearly identical layers. Lots of repeated, manual operations. For years there has been a not widely known REST API to configure GeoServer remotely. In the latest GeoServer versions it has become increasingly stable and powerful. Rob has developed a toolkit for this, soon to be published under https://github.com/borrob.
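
To give a flavour of this kind of scripting, here is a minimal sketch against GeoServer's standard REST API using the Python requests library; host, credentials and workspace name are placeholders, and this is a generic illustration rather than Rob's toolkit.

    # Create a workspace over GeoServer's REST API, then list workspaces.
    import requests

    GEOSERVER = "http://localhost:8080/geoserver/rest"
    AUTH = ("admin", "geoserver")  # placeholder credentials

    r = requests.post(GEOSERVER + "/workspaces",
                      json={"workspace": {"name": "demo"}},
                      auth=AUTH)
    r.raise_for_status()

    r = requests.get(GEOSERVER + "/workspaces.json", auth=AUTH)
    print(r.json())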

4. Willem Hofmans (JW van Aalst) – White Spots and Black Holes in the BGT

Willem, and in his absence Jan-Willem van Aalst, presented a journey of discovery into the nooks and details of the BGT. There is still much to discover there: White Spots, where the BGT is not yet complete, are regularly published by Jan-Willem via his website. Black Holes, which Willem focused on, are exciting too: are there errors in the BGT, or in the tools, such as NLExtract, that read the often over-complicated BGT GML? Border areas between rectangles and curves; it became a journey full of surprises. Follow-up within NLExtract is already underway.

Further announcements:

Erik Meerburg (with Hans van der Kwast): the first QGIS User Day on 31 January 2018 at IHE in Delft. It has since taken place and was an overwhelming success, with over 100 participants. A QGIS NL User Group is coming. More about that soon, keep following us here!

Erik Meerburg: FOSS4G NL 2018: at that moment there were talks with several universities and polytechnics, because everyone would like to host this event. Since then there has been considerable progress, so save the date: Wednesday 11 July 2018 at Aeres Hogeschool in Almere. More news to follow!

Edward Mac Gillavry: following earlier discussions that day around the key registries, notably the BGT and NLExtract, and OSGeo.nl's aim to organize smaller, targeted events/workshops/hackathons/code sprints, WebMapper, also a sponsor of the OSGeo.nl Meetup, offered to organize an NLExtract Day in 2018. Form, place and time are still to be determined; more about this here and via the OSGeo.nl channels.

All in all another fine afternoon, with the vast majority of attendees also joining the shared dinner at Cafe Dudok.

Categories: OSGeo Planet

Gary Sherman: Quick Guide to Getting Started with PyQGIS 3 on Windows

Tue, 2018-02-13 09:00

Getting started with Python and QGIS 3 can be a bit overwhelming. In this post we give you a quick start to get you up and running and maybe make your PyQGIS life a little easier.

There are likely many ways to set up a working PyQGIS development environment---this one works pretty well.


Requirements
  • OSGeo4W Advanced Install of QGIS
  • pip (for installing/managing Python packages)
  • pb_tool (cross-platform tool for compiling/deploying/distributing QGIS plugin)
  • A customized startup script to set the environment (pyqgis.cmd)
  • IDE (optional)
  • Emacs (just kidding)
  • Vim (just kidding)

We'll start with the installs.

Installing

Almost everything we need can be installed using the OSGeo4W installer available on the QGIS website.

OSGeo4W

From the QGIS website, download the appropriate network installer (32 or 64 bit) for QGIS 3.

  • Run the installer and choose the Advanced Install option
  • Install from Internet
  • Choose a directory for the install---I prefer a path without spaces such as C:\OSGeo4W
  • Accept default for local package directory and Start menu name
  • Tweak network connection option if needed on the Select Your Internet Connection screen
  • Accept default download site location
  • From the Select packages screen, select: Desktop -> qgis: QGIS Desktop

When you click Next a bunch of additional packages will be suggested---just accept them and continue the install.

Once complete you will have a functioning QGIS install along with the other parts we need. If you want to work with the nightly build of QGIS, choose Desktop -> qgis-dev instead.

If you installed QGIS using the standalone installer, the easiest option is to remove it and install from OSGeo4W. You can run both the standalone and OSGeo4W versions on the same machine, but you need to be extra careful not to mix up the environment.

Setting the Environment

To continue with the setup, we need to set the environment by creating a .cmd script. The following is adapted from several sources, and trimmed down to the minimum. Copy and paste it into a file named pyqgis.cmd and save it to a convenient location (like your HOME directory).

    @echo off
    SET OSGEO4W_ROOT=C:\OSGeo4W3
    call "%OSGEO4W_ROOT%"\bin\o4w_env.bat
    call "%OSGEO4W_ROOT%"\apps\grass\grass-7.4.0\etc\env.bat
    @echo off
    path %PATH%;%OSGEO4W_ROOT%\apps\qgis-dev\bin
    path %PATH%;%OSGEO4W_ROOT%\apps\grass\grass-7.4.0\lib
    path %PATH%;C:\OSGeo4W3\apps\Qt5\bin
    path %PATH%;C:\OSGeo4W3\apps\Python36\Scripts
    set PYTHONPATH=%PYTHONPATH%;%OSGEO4W_ROOT%\apps\qgis-dev\python
    set PYTHONHOME=%OSGEO4W_ROOT%\apps\Python36
    set PATH=C:\Program Files\Git\bin;%PATH%
    cmd.exe

You should customize the set PATH statement to add any paths you want available when working from the command line. I added paths to my git install.

The last line starts a cmd shell with the settings specified above it. We'll see an example of starting an IDE in a bit.

You can test to make sure all is well by double-clicking on our pyqgis.cmd script, then starting Python and attempting to import one of the QGIS modules:

    C:\Users\gsherman>python3
    Python 3.6.0 (v3.6.0:41df79263a11, Dec 23 2016, 07:18:10) [MSC v.1900 32 bit (Intel)] on win32
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import qgis.core
    >>> import PyQt5.QtCore

If you don't get any complaints on import, things are looking good.
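
From here you can also run standalone scripts against the QGIS libraries. A minimal sketch, assuming the OSGeo4W qgis-dev layout used above (adjust the prefix path to your install):

    # Standalone PyQGIS 3 script, runnable from the pyqgis.cmd shell.
    from qgis.core import QgsApplication

    QgsApplication.setPrefixPath(r"C:\OSGeo4W3\apps\qgis-dev", True)
    qgs = QgsApplication([], False)  # False = headless, no GUI
    qgs.initQgis()

    # ... work with QgsVectorLayer, QgsProject, etc. here ...

    qgs.exitQgis()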

Installing pb_tool

Open your customized shell (double-click on pyqgis.cmd to start it) to install pb_tool:

python3 -m pip install pb_tool

Check to see if pb_tool is installed correctly:

    C:\Users\gsherman>pb_tool
    Usage: pb_tool [OPTIONS] COMMAND [ARGS]...

      Simple Python tool to compile and deploy a QGIS plugin. For help on a
      command use --help after the command: pb_tool deploy --help.

      pb_tool requires a configuration file (default: pb_tool.cfg) that declares
      the files and resources used in your plugin. Plugin Builder 2.6.0 creates
      a config file when you generate a new plugin template.

      See http://g-sherman.github.io/plugin_build_tool for for an example config
      file. You can also use the create command to generate a best-guess config
      file for an existing project, then tweak as needed.

      Bugs and enhancement requests, see:
      https://github.com/g-sherman/plugin_build_tool

    Options:
      --help  Show this message and exit.

    Commands:
      clean       Remove compiled resource and ui files
      clean_docs  Remove the built HTML help files from the...
      compile     Compile the resource and ui files
      config      Create a config file based on source files in...
      create      Create a new plugin in the current directory...
      dclean      Remove the deployed plugin from the...
      deploy      Deploy the plugin to QGIS plugin directory...
      doc         Build HTML version of the help files using...
      help        Open the pb_tools web page in your default...
      list        List the contents of the configuration file
      translate   Build translations using lrelease.
      update      Check for update to pb_tool
      validate    Check the pb_tool.cfg file for mandatory...
      version     Return the version of pb_tool and exit
      zip         Package the plugin into a zip file suitable...

If you get an error, make sure C:\OSGeo4W3\apps\Python36\Scripts is in your PATH.

More information on using pb_tool is available on the project website.

Working on the Command Line

Just double-click on your pyqgis.cmd script from the Explorer or a desktop shortcut to start a cmd shell. From here you can use Python interactively and also use pb_tool to compile and deploy your plugin for testing.

IDE Example

By adding one line to our pyqgis.cmd script, we can start our IDE with the proper settings to recognize the QGIS libraries:

start "PyCharm aware of Quantum GIS" /B "C:\Program Files (x86)\JetBrains\PyCharm 3.4.1\bin\pycharm.exe" %*

We added the start statement with the path to the IDE (in this case PyCharm). If you save this to something like pycharm.cmd, you can double-click on it to start PyCharm. The same method works for other IDEs, such as PyDev.

Within your IDE settings, point it to use the Python interpreter included with OSGeo4W---typically at: %OSGEO4W_ROOT%\bin\python3.exe. This will make it pick up all the QGIS goodies needed for development, completion, and debugging. In my case OSGEO4W_ROOT is C:\OSGeo4W3, so in the IDE, the path to the correct Python interpreter would be: C:\OSGeo4W3\bin\python3.exe.

Make sure you adjust the paths in your .cmd scripts to match your system and software locations.

Workflow

Here is an example of a workflow you can use once you're setup for development.

Creating a New Plugin
  1. Use the Plugin Builder plugin to create a starting point [1]
  2. Start your pyqgis.cmd shell
  3. Use pb_tool to compile and deploy the plugin (pb_tool deploy will do it all in one pass)
  4. Activate it in QGIS and test it out
  5. Add code, deploy, test, repeat

Working with Existing Plugin Code

The steps are basically the same as creating a new plugin, except we start by using pb_tool to create a new config file:

  1. Start your pyqgis.cmd shell
  2. Change to the directory containing your plugin code
  3. Use pb_tool create to create a config file
  4. Edit pb_tool.cfg to adjust/add things create may have missed
  5. Start at step 3 in Creating a New Plugin and press on

Troubleshooting

Assuming you have things properly installed, trouble usually stems from an incorrect environment.

  • Make sure QGIS runs and the Python console is available and working
  • Check all the paths in your pyqgis.cmd or your custom IDE cmd script
  • Make sure your IDE is using the Python interpreter that comes with OSGeo4W

[1] Plugin Builder 3.x generates a pb_tool config file

Categories: OSGeo Planet

Blog 2 Engenheiros: How to organize your maps in ArcGIS and QGIS?

Tue, 2018-02-13 07:07

Not everyone is an organized person, and that is no different for those who spent 5+ years in a degree program.

If I asked you for the materials from the Organic Chemistry classes of your undergraduate course, would you be able to hand them over? Or would they be lost in some remote folder that not even Windows could find?

If you have organization problems and work with geoprocessing, we will give you tips to get organized and not lose your GIS files.

[Figure: What does your desktop look like? Organized or a mess? Source: ATRL.]

Advantages of Organization

By becoming an organized person, you will notice benefits such as spending less time searching for your files and correcting errors. If you don't waste time on that, you will have more time for productive activities.

Don't think that good document-filing practices are gone just because we no longer use paper.

Aaron Lynn, on the Asian Efficiency site, presents some rules that should be followed to keep your computer organized. They are:

  • Don't save documents on the Desktop;
  • Limit the creation of new folders;
  • Get used to thinking in hierarchies;
  • Create a folder for completed projects (archive).

Among these rules, the idea of hierarchy is the most important, because you will have to classify your documents, for example, as Personal or Work; inside the Work folder you can have further classes, such as Ongoing Projects, Completed Projects and Administrative Documents.

[Figure: Example of organization using a hierarchy for a work folder.]

Does it seem complicated? If you get used to being organized, things will start to flow easily. Still seems difficult? Here are some sites that can help you become more organized:

Organization in GIS

Now let's apply some of these ideas to manage the files of our Geographic Information System. Watch our video and find out how.

Follow our tips and you will be more organized. Organization is a skill we develop over time, so don't give up.

And if you have any organization suggestions, leave them in the comments.

Categories: OSGeo Planet

Free and Open Source GIS Ramblings: TimeManager 2.5 published

Mon, 2018-02-12 21:39

TimeManager 2.5 is quite likely going to be the final TimeManager release for the QGIS 2 series. It comes with a couple of bug fixes and enhancements:

  • Fixed #245: updated help.htm
  • Fixed #240: now hiding unmanageable WFS layers
  • Fixed #220: fixed issues with label size
  • Fixed #194: now exposing additional functions: animation_time_frame_size, animation_time_frame_type, animation_start_datetime, animation_end_datetime
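
These functions can then be used anywhere QGIS expressions are accepted. A minimal PyQGIS check, assuming TimeManager is installed and has registered its expression functions:

    # Evaluate one of the exposed functions; returns None with a parser error
    # if TimeManager has not registered it.
    from qgis.core import QgsExpression

    exp = QgsExpression('animation_start_datetime()')
    print(exp.evaluate())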

Besides updating the help, I also decided to display it more prominently in the settings dialog (similarly to how the help is displayed in the field calculator or in Processing):

So far, I haven’t started porting to QGIS 3 yet. If you are interested in TimeManager and want to help, please get in touch.

On this note, let me leave you with a couple of animation inspirations from the Twitterverse:

Here's my attempt, Thanks @tjukanov for the inspiration and @dig_geo_com for the Tutorial. Distances traveled in 2,5,7,9,11,13,15 minutes in Cluj-Napoca, Romania. Made with @QGIS#TimeManager,@openstreetmap, data from @here pic.twitter.com/tEhXfy6CAs

— vincze istvan (@spincev) February 3, 2018

More typographic experiments with Danish ship data.
– 24 hours of ship traffic
– Size of ship name defined by a combination of vessel size and speed
– Direction of the name is defined by vessel's course pic.twitter.com/2p16Qo1Kuf

— Topi Tjukanov (@tjukanov) January 11, 2018

How the City of #Zurich has grown since 1875.

Made with @QGIS#TimeManager.
Thx to @underdarkGIS and @tjukanov for the inspiration.#geogiffery #DataViz #Map #cartography pic.twitter.com/TY2flppsVA

— Marco Sieber (@DonGoginho) January 17, 2018

Categories: OSGeo Planet

gvSIG Team: gvSIG Association will participate in ILoveFS 2018

Mon, 2018-02-12 12:09


For the third year in a row, Datalab is organizing ILoveFS18 at MediaLab Prado, a space to show affection and love for Free Software, naturally on February 13 and 14.

The gvSIG Association will be present, showing that free and open source GIS can be used in almost any area of our lives. We will run a basic gvSIG workshop applied to journalism, so that everyone can learn Free Geomatics in a practical and fun way.

The sessions will take place from 18:45 to 20:30 on Tuesday the 13th and from 18:45 to 20:00 on Wednesday the 14th at the MediaLab Prado premises.

Of course, the event and the workshops are completely free of charge, and you only need to register on the website set up for this purpose.

You can see the full information at http://medialab-prado.es/article/ilovefs18

And you can download the portable version for your operating system from http://www.gvsig.com/es/productos/gvsig-desktop/descargas

Categories: OSGeo Planet

CARTO Inside Blog: ETL into CARTO with ogr2ogr

Mon, 2018-02-12 10:39

The default CARTO data importer is a pretty convenient way to quickly get data into the platform, but for enterprises setting up automated updates it has some limitations:

  • there’s no way to define type coercions for CSV data;
  • some common GIS formats like File Geodatabase aren’t supported;
  • the sync facility is “pull” only, so data behind a firewall is inaccessible; and,
  • the sync cannot automatically filter the data before loading it into the system.

Fortunately, there’s a handy command-line tool that can automate many common enterprise data loads: ogr2ogr.

Basic Operation

ogr2ogr has a well-earned reputation for being hard to use. The command-line options are plentiful and terse, the standard documentation page lacks examples, and format-specific documentation is hidden away with the driver documentation.

Shapefile

The basic structure of an ogr2ogr call is “ogr2ogr -f format destination source”. Here’s a simple shapefile load.

    ogr2ogr \
      --debug ON \
      --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
      -t_srs "EPSG:4326" \
      -nln interesting \
      -f Carto \
      "Carto:pramsey" \
      interesting_things.shp

The parameters are:

  • --debug turns on verbose debugging, which is useful during development to see what’s happening behind the scenes.
  • --config is used to pass generic “configuration parameters”. In this case we pass our CARTO API key so we are allowed to write to the database.
  • -t_srs is the “target spatial reference system”, telling ogr2ogr to convert the spatial coordinates to “EPSG:4326” (WGS84) before writing them to CARTO. The CARTO driver expects inputs in WGS84, so this step is mandatory.
  • -nln is the “new layer name”, so the name of the uploaded table can differ from that of the input file.
  • -f is the format of the destination layer, so for uploads to CARTO, it is always “Carto”.
  • Carto:pramsey is the “destination datasource”: a CARTO source, in the “pramsey” account. Change this to your user name. (Note for multi-user accounts: you must supply your user name here, not your organization name.)
  • interesting_things.shp is the “source datasource”, which for a shapefile is just the path to the file.
File Geodatabase

Loading a File Geodatabase is almost the same as loading a shapefile, except that a file geodatabase can contain multiple layers, so the conversion must also specify which layer to convert, by adding the source layer name after the data source. You can load multiple layers in one run by providing multiple layer names.

    ogr2ogr \
      --debug ON \
      --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
      -t_srs "EPSG:4326" \
      -nln cities \
      -f Carto \
      "Carto:pramsey" \
      CountyData.gdb Cities

In this example, we take the “Cities” layer from the county database, and write it into the “cities” table of CARTO. Note that if you do not re-map the layer name to all lower case, you’ll get a mixed case layer in CARTO, which you may not want.

Filtering

You can use OGR on any input data source to filter the data prior to loading. This can be useful for loads of large inputs that are “only the data since time X” or “only the data in this region”, like this:

    ogr2ogr \
      --debug ON \
      --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
      -t_srs "EPSG:4326" \
      -nln cities \
      -f Carto \
      -sql "SELECT * FROM Cities WHERE state_fips = 53" \
      "Carto:pramsey" \
      CountyData.gdb Cities

Since the filter is just a SQL statement, the filter can both reduce the number of records and also apply transforms to the output on the way: reduce the number of columns, apply some data transformations, anything that is possible using the SQLite dialect of SQL.

Overwrite or Append

By default, ogr2ogr runs in “append” mode (you can force it with the -append flag), so if you run the same translation multiple times, you’ll get rows added into your table. This can be useful for processes that regularly take the most recent entries and copy them into CARTO.

For translations where you want to replace the existing table, use the -overwrite mode, which will drop the existing table, and create a new one in its place.

Because of some limitations in how the OGR CARTO driver handles primary keys, the OGR -update mode does not work correctly.
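
If you drive these loads from Python instead of the shell, GDAL's own bindings expose the same append-or-overwrite choice. A sketch, mirroring the shapefile example above (same placeholder API key and account name):

    # Same load as the shapefile example, via GDAL's Python bindings.
    from osgeo import gdal

    gdal.UseExceptions()
    gdal.SetConfigOption("CARTO_API_KEY", "abcdefghijabcdefghijabcdefghijabcdefghij")

    gdal.VectorTranslate(
        "Carto:pramsey",           # destination datasource
        "interesting_things.shp",  # source datasource
        format="Carto",
        dstSRS="EPSG:4326",
        layerName="interesting",
        accessMode="overwrite",    # or "append" to add rows to the existing table
    )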

OGR Virtual Format

As you can see, the command-line complexity of an OGR conversion starts high. The complexity only goes up as advanced features like filtering and arbitrary SQL are added.

To contain the complexity in one location, you can use the OGR “virtual format”, VRT files, to define your data sources. This is handy for managing a library of conversions in source control. Each data source becomes its own VRT file, and the actual OGR commands become smaller.

CSV Type Enforcement

CSV files are convenient ways of passing data, but they are under-defined: they supply column names, but not column types. This forces CSV consumers to do type guessing based on the input data, or to coerce every input to a lowest common denominator string type.

Particularly for repeated and automated uploads, it would be nice to define the column types once beforehand and have them respected in the final CARTO table.

For example, take this tiny CSV file:

    longitude,latitude,name,the_date,the_double,the_int,the_int16,the_int_as_str,the_datetime
    -120,51,"First Place",2018-01-01,2.3,123456789,1234,00001234,2014-03-04 08:12:23
    -121,52,"Second Place",2017-02-02,4.3,423456789,4234,00004234,"2015-05-05 09:15:25"

Using a VRT, we can define a CSV file as a source, and also add the rich metadata needed to support proper type definitions:

    <OGRVRTDataSource>
      <OGRVRTLayer name="test_csv">
        <SrcDataSource>/data/exports/test_csv.csv</SrcDataSource>
        <GeometryField encoding="PointFromColumns" x="longitude" y="latitude"/>
        <GeometryType>wkbPoint</GeometryType>
        <LayerSRS>WGS84</LayerSRS>
        <OpenOptions>
          <OOI key="EMPTY_STRING_AS_NULL">YES</OOI>
        </OpenOptions>
        <Field name="name" type="String" nullable="false" />
        <Field name="a_date" type="Date" src="the_date" nullable="true" />
        <Field name="the_double" type="Real" nullable="true" />
        <Field name="the_int" type="Integer" nullable="true" />
        <Field name="the_int16" type="Integer" subtype="Int16" nullable="true" />
        <Field name="the_int_as_str" type="String" nullable="true" />
        <Field name="the_datetime" type="DateTime" nullable="true" />
      </OGRVRTLayer>
    </OGRVRTDataSource>

This example has a number of things going on:

  • The <SrcDataSource> is an OGR connection string, as defined in the driver documentation for the format. For a CSV, it’s just the path to a file with a “csv” extension.
  • The <GeometryField> line maps coordinate columns into a point geometry.
  • The <LayerSRS> confirms the coordinates are WGS84. They could also be some planar format, and OGR can reproject them if requested.
  • The <OpenOptions> let us pass one of the many CSV open options.
  • The <Field> type definitions, using the “type” attribute to explicitly define types, including obscure ones like 16-bit integers.
  • Column renaming, in the “a_date” <Field>, maps the source column name “the_date” to “a_date” in the target.
  • Null enforcement, in the “name” <Field>, creates a target column with a NOT NULL constraint.
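
Before running the upload, it can be worth sanity-checking that the declared types are honoured, for example by opening the VRT with GDAL's Python bindings (a quick sketch):

    # Open the virtual layer and print the resolved field types.
    from osgeo import ogr

    ds = ogr.Open("test_csv.vrt")
    defn = ds.GetLayer(0).GetLayerDefn()
    for i in range(defn.GetFieldCount()):
        field = defn.GetFieldDefn(i)
        print(field.GetName(), field.GetTypeName())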

To execute the translation, we use the VRT as the source argument in the ogr2ogr call.

    ogr2ogr \
      --debug ON \
      --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
      -t_srs "EPSG:4326" \
      -f Carto \
      "Carto:pramsey" \
      test_csv.vrt

Database-side SQL

Imagine you have an Oracle database with sales information in it, and you want to upload a weekly snapshot of transactions. The database is behind the firewall, and the transactions need to be joined to location data in order to be mapped. How to do it?

With VRT tables and the OGR Oracle driver, it’s just some more configuration during the load step:

    <OGRVRTDataSource>
      <OGRVRTLayer name="detroit_locations">
        <SrcDataSource>OCI:scott/password@ora.company.com</SrcDataSource>
        <LayerSRS>EPSG:26917</LayerSRS>
        <GeometryField encoding="PointFromColumns" x="easting" y="northing"/>
        <SrcSQL>
          SELECT sales.sku, sales.amount, sales.tos,
                 locs.latitude, locs.longitude
          FROM sales
          JOIN locs ON sales.loc_id = locs.loc_id
          WHERE locs.city = 'Detroit'
          AND sales.transaction_date > '2018-01-01'
        </SrcSQL>
      </OGRVRTLayer>
    </OGRVRTDataSource>

Some things to note in this example:

  • The <SrcDataSource> holds the Oracle connection string
  • The coordinates are stored in UTM17, in northing/easting columns, but we can still easily map them into a point type for reprojection later.
  • The output data source is actually the result of a join executed on the Oracle database, attributing each sale with the location it was made. We don’t have to ship the tables to CARTO separately.

The ability to run any SQL on the source database is a very powerful tool to ensure that the uploaded data is “just right” before it arrives on the CARTO side for analysis and display.

As before, the VRT is run with a simple execution of the ogr2ogr command line:

    ogr2ogr \
      --debug ON \
      --config CARTO_API_KEY abcdefghijabcdefghijabcdefghijabcdefghij \
      -t_srs "EPSG:4326" \
      -f Carto \
      "Carto:pramsey" \
      test_oracle.vrt

Multi-format Sources

Suppose you have a source of attribute information and a source of location information, but they are in different formats in different databases: how to bring them together in CARTO? One way, as usual, would be to upload them separately and join them on the CARTO side with SQL. Another way is to use the power of ogr2ogr and VRT to do the join during the data upload.

For example, imagine having transaction data in a PostgreSQL database, and store locations in a Geodatabase. How to bring them together? Here’s a joined_stores.vrt file that does the join in ogr2ogr:

    <OGRVRTDataSource>
      <OGRVRTLayer name="sales_data">
        <SrcDataSource>Pg:dbname=pramsey</SrcDataSource>
        <SrcLayer>sales.sales_data_2017</SrcLayer>
      </OGRVRTLayer>
      <OGRVRTLayer name="stores">
        <SrcDataSource>store_gis.gdb</SrcDataSource>
        <SrcLayer>Stores</SrcLayer>
      </OGRVRTLayer>
      <OGRVRTLayer name="joined">
        <SrcDataSource>joined_stores.vrt</SrcDataSource>
        <SrcSQL dialect="SQLITE">
          SELECT stores.*, sales_data.*
          FROM sales_data
          JOIN stores ON sales_data.store_id = stores.store_id
        </SrcSQL>
      </OGRVRTLayer>
    </OGRVRTDataSource>

Some things to note:

  • The “joined” layer uses the VRT file itself in the <SrcDataSource> definition!
  • Each <OGRVRTLayer> is a full-fledged VRT layer, so you can do extra processing in them. Apply type definitions to CSV, run complex SQL on a remote database, whatever you want.
  • The join layer uses the “SQLite” dialect, so anything available in SQLite is available to you in the join step.
Almost an ETL

Combining the ability to read and write from multiple formats with the basic functionality of the SQLite engine, and chaining operations through multi-layer VRT layers, ogr2ogr provides the core functions of an “extract-transform-load” engine, in a package that is easy to automate and maintain.

For users with data behind a firewall, who need more complex processing during their loads, or who have data locked in formats the CARTO importer cannot read, ogr2ogr should be an essential tool.

Getting ogr2ogr

OGR is a subset of the GDAL suite of libraries and tools, so you need to install GDAL to get ogr2ogr.

  • For Linux, look for “gdal” packages in your Linux distribution of choice.
  • For Mac OS X, use the GDAL Framework builds.
  • For Windows, use the MS4W package system or pull the MapServer builds from GISInternals and use the included GDAL binaries.
Categories: OSGeo Planet

gvSIG Team: GIS applied to Municipality Management: Module 11 ‘Reprojecting vector layers’

Mon, 2018-02-12 08:29

The video of the eleventh module is now available, in which we will show how to reproject vector layers.

Sometimes municipalities need external geographic information to work with, for example cartography published by another administration, such as a regional or national one. That cartography can be in a different reference system than the one technicians usually work with in the municipality. If we don’t take the reference systems into account, the two cartographies won’t overlap correctly.

The municipality technicians may also use old cartography, which is in an obsolete reference system, and need to have it in an updated reference system. For this, it is necessary to reproject that cartography.

In module 2 you can consult all the information related to the reference systems.

Apart from reprojecting from one reference system to another, sometimes it is necessary to apply a transformation to improve the reprojection. For example, in the case of Spain, to reproject a layer available in ED50 (the official reference system until a few years ago) to ETRS89 (the current official system), it is necessary to apply a grid-based transformation; otherwise we would have a difference of about 7 meters between these layers.
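
To see this kind of reprojection outside gvSIG, here is a quick sketch with Python's pyproj library; EPSG:23030 is ED50 / UTM zone 30N and EPSG:25830 is ETRS89 / UTM zone 30N, and which transformation gets applied depends on the grids installed.

    # Transform one ED50 / UTM 30N point to ETRS89 / UTM 30N (an illustration).
    # pyproj applies the best transformation it has available: with the official
    # grid installed the result differs by several meters (about 7 m in Spain,
    # as noted above) from the simple parameter-based fallback.
    from pyproj import Transformer

    transformer = Transformer.from_crs("EPSG:23030", "EPSG:25830", always_xy=True)
    x, y = transformer.transform(700000, 4400000)  # an arbitrary ED50 point
    print(x, y)                                    # the ETRS89 coordinates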

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:

Related posts:

Categories: OSGeo Planet

From GIS to Remote Sensing: Available the User Manual of the Semi-Automatic Classification Plugin v. 6

Sat, 2018-02-10 15:34
I've updated the user manual of the Semi-Automatic Classification Plugin (SCP) for the new version 6. This updated version contains the description of all the tools included in SCP, as well as a brief introduction to remote sensing definitions and the first basic tutorial.
The user manual in English is available at this link. Other languages are also available, although some translations are incomplete.
I'd like to deeply thank all the volunteers who have translated the previous versions of this user manual, and I invite you to help translate this new version into your language.
It is possible to easily translate the user manual into any language, because it is written in reStructuredText as markup language (using Sphinx). Therefore, your contribution is fundamental for the translation of the manual into your language.


Categories: OSGeo Planet

Jackie Ng: One MapGuide/FDO build system to rule them all?

Fri, 2018-02-09 16:38
Before the bombshell of the death of Autodesk Infrastructure Map Server was dropped, I was putting the finishing touches on making (pun intended) the MapGuide/FDO build experience on Linux a more pleasant one.

Namely, I'd had it with autotools (some may describe it as autohell) as the way to build MapGuide/FDO on Linux. For FDO, this was the original motivation for introducing CMake as an alternative. CMake is a more pleasant way to build software on Linux than autotools because:
  1. CMake builds your sources outside of the source tree. If you're doing development on a SVN working copy this is an absolute boon as it means when it comes time to commit any changes, you don't have to deal with sifting through tons of autotools-generated junk that is left in your actual SVN wc.
  2. CMake builds are faster than their autotools counterpart.
  3. It is much easier to find and consume external libraries with CMake than it is through autotools, which makes build times faster because we can just source system-installed copies of the thirdparty libraries we use, instead of wasting time building the copies in our internal thirdparty source tree ourselves. If we are able to use system-installed copies of libraries when building FDO, then we can take advantage of SVN sparse checkouts and skip downloading whole chunks of thirdparty library sources that we never have to build!
Sadly, while this sounds nice in theory, the CMake way to build FDO had fallen into a state of disrepair. My distaste for autotools was sufficient motivation to get the CMake build back into working condition. Several weeks of bashing at various CMakeLists.txt files later, the FDO CMake build was operational again and had several major advantages over the autotools build (in addition to what was already mentioned):
  • We can set up CMake to generate build configurations for Ninja instead of standard make. A ninja-powered CMake build is faster than standard make ^.
  • On Ubuntu 14.04 LTS (the current Ubuntu version we're targeting), all the thirdparty libraries we use were available for us to apt-get install in the right version ranges, and the CMake build can take advantage of all of them. Not a single internal thirdparty library copy needs to be built!
  • We can easily light up compiler features like AddressSanitizer and link with the faster gold instead of ld. AddressSanitizer in particular easily helped us catch some issues that had flown under the radar.
  • All of the unit tests are build-able and more importantly ... executable outside the source tree, making it easier to fix up whatever was failing.
Although we now had a functional FDO CMake build, MapGuide was still built on Linux using autotools. So for the same reasons and motivations, I started the process of introducing CMake to the MapGuide build system for Linux.

Unlike FDO, MapGuide still needed some of the internal thirdparty libraries built.
  • DBXML - No ubuntu package available, though we can get it to build against a system-provided version of xerces, so we can at least skip building that part of DBXML.
  • Apache HTTPD - Ubuntu package available, but having MapGuide be able to integrate with an Ubuntu-provided httpd installation was not in the scope of this work, even though this is a nice thing to have.
  • PHP - Same reasons as Apache HTTPD
  • Antigrain Geometry - No ubuntu package available. Also the AGG sources are basically wedded to our Renderers project anyways.
  • DWF Toolkit - No ubuntu package available
  • CS-Map - No ubuntu package available
For everything else, Ubuntu provided the package in the right version ranges for CMake to take advantage of. Another few weeks of bashing various CMakeLists.txt files into shape and we had FDO and MapGuide both build-able on Linux via CMake. To solve the problem of still needing to build some internal thirdparty libs, while retaining the CMake quality of everything being built outside the source tree, some wrapper shell scripts are provided that copy the applicable thirdparty library sources out of the current directory, build them in their copied directories, and then invoke CMake, passing in all the required parameters so that it knows where to look for the internal libraries to link against when it comes time to build MapGuide proper.

This was also backported to FDO, so that on distros where we do not have all our required thirdparty libraries available, we can selectively build internal copies and be able to find/link the rest, and have CMake take care of all of that for us.

So what's with the title of this post?

Remember when I wrote about how interesting vcpkg was?

What is best used with vcpkg to easily consume thirdparty libraries on Windows? Why, CMake of course! Building MapGuide on Windows via CMake is not on the immediate horizon. We'll still be maintaining Visual Studio project files by hand (instead of auto-generating them with CMake) for the immediate future, but can you imagine being able to build FDO and MapGuide on both Windows and Linux with CMake and not having to waste time on huge SVN checkouts and building thirdparty libraries? That future is starting to look genuinely possible now!

For the next major release of MapGuide Open Source, it is my plan to use CMake over autotools as the way to build both MapGuide and FDO on Linux.

^ Well, the ninja-powered CMake build used to be blazing fast until Meltdown and Spectre happened. My dev environment got the OS security patches, and whatever build performance gains were made through ninja and CMake were instantly wiped out; we were back to square one in terms of build time. Still, the autotools build performed worse after the Meltdown patches, so while CMake still beats the autotools build in terms of build time, we ultimately gained nothing on this front.

Thanks Intel!!!
Categories: OSGeo Planet

gvSIG Team: GIS applied to Municipality Management: Module 10 ‘How to convert cartography from CAD to GIS’

Thu, 2018-02-08 09:48

The video of the tenth module is now available, in which we will show how to load and manage cartography in CAD format on gvSIG.

Many municipalities have their geographic information in CAD format, and in many cases there’s a single file for the whole municipality that contains all types of information, such as power lines, parcels, drinking water system, sewage system…, each one in a different layer.

That sometimes makes the information difficult to manage; we may even have to divide the municipality into sheets, losing the view of our municipality as a whole. In that case, to make queries, calculations and so on, we would have to open the different files.

The advantage of working with a Geographic Information System is that each type of information is available in a different file (that would be the optimal way to work), and we are able to overlap the different files (the ‘layers’ of our GIS) in the same View in order to make analysis, queries…

Another important advantage is that the vector layers in a GIS have an associated attribute table, and in the SHP format, the most common in GIS, we can add all the fields that we want to that attribute table (length, area, owner, release date…). We will have a great amount of alphanumeric information about the different elements.

Having alphanumeric information makes it easy, for example, to know the areas of all the parcels of our municipality at once; we don’t have to select them individually as in a CAD. We can also run queries on them. For example, we can query the parcels whose area is larger than 1000 square meters with a simple sentence, and they will appear selected directly (see the sketch below).
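
As an illustration, the same conversion-plus-query can be scripted with GDAL/OGR's Python bindings; the file and layer names here are hypothetical, and OGR_GEOM_AREA is OGR SQL's built-in computed area field.

    # Convert a CAD drawing to a shapefile, then select parcels over 1000 m².
    from osgeo import gdal

    gdal.UseExceptions()

    # DXF entities carry their CAD layer name in a "Layer" attribute field.
    gdal.VectorTranslate(
        "parcels.shp", "municipality.dxf",
        format="ESRI Shapefile",
        where="Layer = 'PARCELS'",
    )

    # OGR SQL computes the geometry area on the fly via OGR_GEOM_AREA.
    gdal.VectorTranslate(
        "big_parcels.shp", "parcels.shp",
        format="ESRI Shapefile",
        SQLStatement="SELECT * FROM parcels WHERE OGR_GEOM_AREA > 1000",
    )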

The cartography to follow this video can be downloaded from this link.

Here you have the videotutorial of this new module:

Related posts:

Categories: OSGeo Planet

GIS for Thought: QGIS Multi Ring Buffer Plugin Version 1

Wed, 2018-02-07 23:23

After about 3 years of existence, I am releasing version 1 of the Multi Ring Buffer Plugin.

QGIS plugin repository.

With version 1 come a few new features:

  • Ability to choose the layer to buffer after launching the plugin
  • Ability to supply buffer distances as a comma-separated text string
  • Ability to make non-doughnut buffers

Doughnut buffer:

Non-doughnut buffer (regular):
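
The doughnut logic itself boils down to differencing each buffer with the previous one. A minimal sketch with the PyQGIS geometry API; an illustration, not the plugin's actual code.

    # Difference each buffer with the previous one to get non-overlapping rings.
    # `feature` is any QgsFeature with a geometry; distances are in layer units.
    def doughnut_rings(feature, distances, segments=25):
        rings = []
        previous = None
        for d in sorted(distances):
            buffered = feature.geometry().buffer(d, segments)
            rings.append(buffered.difference(previous) if previous else buffered)
            previous = buffered
        return rings

    # e.g. doughnut_rings(feature, [100, 200, 300]) -> three concentric rings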

Categories: OSGeo Planet

gvSIG Team: Adding new colour tables in gvSIG Desktop, more options to represent our data

Wed, 2018-02-07 16:30

Colour tables are used in gvSIG Desktop to represent both raster data (for example, a Digital Elevation Model) and vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables. But most users don’t know that it’s very easy to add new colour tables. Do you want to see how easy it is?

First of all you have to know that the colour tables used by gvSIG Desktop are stored as xml files in the ‘colortable’ folder, inside the ‘gvSIG’ folder. So, if you delete some of these xml files, those tables will no longer be available in gvSIG Desktop.

Let’s see now how we can add new colour tables in gvSIG Desktop.

To download new colour tables we will access this website:

http://soliton.vm.bytemark.co.uk/pub/cpt-city/

As you will see, that website contains hundreds of colour tables, many of them applicable to the world of cartography, which can be downloaded in a wide variety of formats, including the ‘ggr’ (GIMP gradient) format supported by gvSIG Desktop. We will download some of the colour tables offered in that ‘ggr’ format.

We launch the ‘Colour table’ tool in gvSIG Desktop and in the new window we press the ‘Import library’ button… then we select the ggr files that we have downloaded, and we have them available right away.

Finally, in the video we will see how they can be applied to raster and vector data once imported.

And if you want to download ALL the colour tables in ‘ggr’ format in a zipped file… they are available here.

Categories: OSGeo Planet

gvSIG Team: Webinar on “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies” (February 15)

Wed, 2018-02-07 12:16

GODAN and the gvSIG Association invite you to join the webinar on “gvSIG Suite: open source software for geographic information management in agriculture – Technology and case studies“, on February 15th at 2 PM GMT.

This webinar will cover the gvSIG Suite, the whole catalog of open source software solutions offered by the gvSIG Association, and case studies about forestry and agriculture for the different products.

Registration is free, and this event is appropriate for all users interested in knowing how to work with an open source Geographic Information System in the agriculture and forestry sectors.

We will speak about gvSIG Desktop, the desktop GIS to manage geographic information and perform vector and raster analysis; gvSIG Mobile, for field data gathering with mobile devices; and gvSIG Online, an integrated platform for Spatial Data Infrastructures, to create geoportals in an easy way and manage cartography between different departments in an organization.

Attendees will be able to interact with the speakers by sending their comments and questions through chat.

Registration is available at: https://app.webinarjam.net/register/24718/18244a0afc

The webinar details are:

Categories: OSGeo Planet

gvSIG Team: Adding new colour tables to gvSIG Desktop, expanding the options to represent our data

Wed, 2018-02-07 11:16

Colour tables are used in gvSIG Desktop both to represent raster data (for example, a Digital Terrain Model) and to represent vector data (they can be applied in legends such as unique values or heat maps). By default gvSIG Desktop has a small catalog of colour tables. What most users don’t know is that it is very easy to add new colour tables. Do you want to see how easy it is?

First of all you should know that the colour tables used by gvSIG Desktop are stored as xml files in the ‘colortable’ folder, inside the ‘gvSIG’ folder. So, if you delete some of these xml files, those tables will no longer be available in gvSIG Desktop.

In the demo video accompanying this post we have deleted all of them except the one called ‘Default’, which you should always be careful not to delete. As shown in the video, after deleting the xml files only one colour table remains that we can apply to our raster and vector layers.

Now let’s look at the really interesting part, which is not how to delete existing colour tables but how to add new ones.

To download new colour tables we will use this website:

http://soliton.vm.bytemark.co.uk/pub/cpt-city/

As you will see, it contains hundreds of colour tables, many of them applicable to the world of cartography and downloadable in a wide variety of formats, including some, such as ‘ggr’ (GIMP gradient), supported by gvSIG Desktop. Of the hundreds of colour tables the page offers, we will download some of them in this ‘ggr’ format.

We launch the ‘Colour table’ tool in gvSIG Desktop and in the window that appears we press the ‘import libraries’ button… then we select the ggr files we have downloaded and voilà!… they are now available.

Finally, in the demo video we will see how, once imported, they can be applied to both raster and vector data.

And if you want to download ALL the colour tables in ggr format in a zipped file… they are available here.

Categories: OSGeo Planet