Welcome to the 3D City Database User Manual!

The 3D City Database (3DCityDB) is a free and open source package consisting of a database schema and a set of software tools to import, export, manage, analyse, and visualize virtual 3D city models on top of a spatial database system. This user manual provides the documentation of the 3D City Database Suite 2022.0 consisting of the following components.

Component                     Version
3D City Database              4.2
Importer/Exporter             5.1
                              4.0
                              2.0
3D Web Map Client             1.9
Web Feature Service           5.1

First steps

This quickstart guide gives step-by-step instructions for setting up a 3D City Database instance and installing the Importer/Exporter. Installers for the 3D City Database components can be downloaded from the official release website or from the project website at https://www.3dcitydb.org. The source code of the 3D City Database is hosted and maintained on GitHub at https://github.com/3dcitydb.

System requirements

3D City Database

Setting up an instance of the 3D City Database requires an existing installation of a PostgreSQL or Oracle database.

PostgreSQL
Supported versions are PostgreSQL 9.6 and higher with the PostGIS extension 2.3 and higher. Please also make sure to always install the latest patches and updates.
Oracle
Supported versions are Oracle 10g R2 and higher. The 3D City Database requires spatial data support provided either through the Oracle Spatial or Locator extension. It is highly recommended to install available patches to avoid unexpected errors and to benefit from the latest functionality. For Oracle 10g R2, at least patch set 10.2.0.4.0 is required for using the KML/COLLADA/glTF export capabilities.

The SQL scripts for creating the 3DCityDB schema are written to be executed by the default command-line client of either database system – namely psql for PostgreSQL and SQL*Plus for Oracle. The scripts include meta commands specific to these clients and will not work properly with different client software. So please make sure psql or SQL*Plus is installed on the machine from which you want to set up the 3D City Database.
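
For example, you can quickly verify on the command line that the required client is available:

$ psql --version
$ sqlplus -V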

Importer/Exporter tool

The Importer/Exporter tool can run on any platform providing support for Java 8 and higher. The recommended version for running the Importer/Exporter is Java 11. The tool has been successfully tested on (but is not limited to) the following operating systems:

  • Microsoft Windows XP, Vista, 7, 8, 10, 11;
  • Apple Mac OS X and macOS;
  • Ubuntu Linux 9 to 21.

Prior to the setup of the Importer/Exporter tool, a Java Runtime Environment (JRE) must be installed on your system. Java installation packages are offered and maintained by different vendors, for example Oracle, Azul (Zulu), or Red Hat (OpenJDK); several of these distributions are free to download and use.

Follow the installation instructions for your operating system. Note that starting with the Java 17 long-term support (LTS) release, Oracle Java is again released under a no-fee, free-to-use license, while the use of previous Oracle Java versions remains restricted to a fee-based subscription license. Likewise, Java binaries from other vendors like Azul Zulu or Red Hat OpenJDK might require a license for commercial use or access to updates. Please carefully review the license terms and conditions of use provided by the vendors.

The Importer/Exporter is shipped with an installer that will guide you through the steps of the setup process. A full installation of the Importer/Exporter including database scripts, plugins and test datasets requires approx. 280 MB of hard disk space. Installing only the mandatory application files will use approx. 160 MB of hard disk space. Installation packages can be selected during the setup process.

The Importer/Exporter runs with 1 GB of main memory by default. The maximum amount of main memory available to the tool is controlled by default JVM options. This setting should be reasonable on most platforms and for most import/export processes. If required, you can manually adapt the main memory limits in the starter script of the program or by using environment variables. Please refer to Section 4.1 for more details.
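
As an illustration only, assuming the starter script honors a JAVA_OPTS environment variable (see Section 4.1 for the exact mechanism and variable names), the memory limit could be raised like this on Linux:

$ JAVA_OPTS="-Xmx4g" ./3DCityDB-Importer-Exporter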

Installation of the Importer/Exporter

Download the installer 3DCityDB-Importer-Exporter-{version}-Setup.jar from the GitHub release section or from the 3D City Database website at https://www.3dcitydb.org and save it to your local file system. In addition to the Importer/Exporter tool, the installer also bundles the 3DCityDB setup and database scripts, the 3D Web Map Client, optional plugins for the Importer/Exporter, and test datasets.

The installer is shipped as an executable Java Archive (JAR) file. A setup wizard will guide you through the steps of the installation process. The wizard can be run with a graphical user interface (GUI) or as a command-line version. In addition, the installer offers an unattended installation mode to enable automatic installation workflows.

GUI setup wizard

Simply double-click on the installer file to launch the setup wizard in GUI mode. This should work fine on most machines. Alternatively, you can execute the wizard from the command line with the following command.

$ java -jar 3DCityDB-Importer-Exporter-{version}-Setup.jar

Once the wizard has started, click through the steps to accept the license agreement and to specify the installation directory for the Importer/Exporter. Afterwards, the wizard lets you choose the optional software packages that should be installed.

Figure: Installation wizard of the Importer/Exporter tool.

It is recommended to at least pick the 3D City Database package that contains all shell and SQL scripts required for setting up an instance of the 3D City Database on your spatial database system. Please refer to Section 1.3 for a step-by-step guide on how to use the scripts. The Sample CityGML and KML/COLLADA datasets package contains license-free sample data that may be used in your first tests.

The Plugins option installs plugins for the Importer/Exporter, which add further functionality to the tool. This release is shipped with the Spreadsheet Generator Plugin and the ADE Manager Plugin. More plugins may be added in future releases.

The 3D Web Map Client is a web-based viewer for 3DCityDB content and provides high-performance 3D visualization and interactive exploration of arbitrarily large semantic 3D city models on top of the open source Cesium Virtual Globe.

After successful installation, the contents of all selected installation packages are available from the installation directory. To run the Importer/Exporter, simply use the starter script located in the installation directory. More information on how to run the software in GUI or CLI mode is provided in Section 4.1.

Note

Before the Importer/Exporter can be used to connect to a PostgreSQL or Oracle database, please first follow the instructions in Section 1.3 to set up the 3D City Database schema on your database.

The installation directory contains the following subfolders:

Contents of the installation directory
Folder/File                   Optional   Explanation
3dcitydb                      x          Contains all shell and SQL scripts and stored procedures for operating the 3DCityDB
3d-web-map-client             x          Contains a ZIP archive with all files required to install the 3D Web Map Client on a web server
ade-extensions                           Contains extension packages to support CityGML ADEs. ADE extensions only have to be copied into this directory to make them available in the program
bin                                      Contains the platform-specific script impexp that allows you to run the Importer/Exporter from the command line
contribs                                 Third-party tools required by the Importer/Exporter (e.g. collada2gltf converter binaries)
lib                                      Contains all libraries required by the Importer/Exporter
licence                                  Contains the license files of the Importer/Exporter
plugins                                  Contains plugins of the Importer/Exporter. Plugins only have to be copied into this directory to make them available in the program
samples                       x          Contains CityGML and KML/COLLADA/glTF test datasets
templates                                Contains HTML templates for information balloons for KML/COLLADA/glTF exports, a selection of coordinate reference systems in the form of XML documents, and example XSLT stylesheets to be used in imports and exports
uninstaller                              Contains a JAR executable that uninstalls the Importer/Exporter
3DCityDB-Importer-Exporter               Platform-specific starter script to launch the Importer/Exporter with a graphical user interface. For instance, under Windows, simply double-click this script to run the program
README.txt                               Brief information about the application

Command-line installation

The setup wizard can alternatively be run in a full headless mode, i.e., without a graphical user interface. This is useful, for instance, if the target computer does not offer a graphical user interface or in case the installation is done in a remote session, e.g. via SSH or similar means.

To launch the installer in console mode rather than in GUI mode, simply use the -console option as shown below.

$ java -jar 3DCityDB-Importer-Exporter-{version}-Setup.jar -console

Similar to the GUI mode, the setup wizard guides you through the steps of the installation process, and user input is required at each step to complete the installation. Like in the GUI mode, you can also choose which of the optional software packages to install.

Unattended installation

Instead of installing the Importer/Exporter in an interactive session using the setup wizard, you can also automatically install and deploy the software on multiple machines.

The following steps provide a simple way to build and deploy a default installation:

  1. Install the Importer/Exporter once using the setup wizard in GUI or in CLI mode as described above. Make sure to select all software packages required for your default installation.
  2. Create a zip archive from the installation directory.
  3. Copy the zip archive to the target machine(s) and unzip it to the destination folder.

Alternatively, you can conduct an automatic installation by using an XML-based setup script. The advantage of this approach is that you can adapt the setup script for each target machine. One option to create a setup script is to run the setup wizard once in GUI mode. When you conclude the installation and before you close the wizard, you can save your installation settings to a file by clicking the Generate an automatic installation script button.

You can also use the following template script instead.

 1  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
 2  <AutomatedInstallation langpack="eng">
 3    <com.izforge.izpack.panels.HelloPanel id="hello"/>
 4    <com.izforge.izpack.panels.InfoPanel id="info"/>
 5    <com.izforge.izpack.panels.LicencePanel id="license"/>
 6    <com.izforge.izpack.panels.TargetPanel id="target">
 7      <installpath>path/to/installation/directory</installpath>
 8    </com.izforge.izpack.panels.TargetPanel>
 9    <com.izforge.izpack.panels.TreePacksPanel id="packs">
10      <pack index="0" name="core" selected="true"/>
11      <pack index="1" name="3dcitydb" selected="false"/>
12      <pack index="2" name="3d-web-map-client" selected="false"/>
13      <pack index="3" name="samples" selected="false"/>
14      <pack index="4" name="plugins" selected="false"/>
15      <pack index="5" name="plugin.spreadsheet.generator" selected="false"/>
16      <pack index="6" name="plugin.ade-manager" selected="false"/>
17    </com.izforge.izpack.panels.TreePacksPanel>
18    <com.izforge.izpack.panels.SummaryPanel id="summary"/>
19    <com.izforge.izpack.panels.InstallPanel id="install"/>
20    <com.izforge.izpack.panels.ShortcutPanel id="shortcut"/>
21    <com.izforge.izpack.panels.FinishPanel id="finish"/>
22  </AutomatedInstallation>

The installation directory is mandatory input and must be provided as value of the <installpath> element (see line 7). In addition, the software packages to be installed can be defined by setting the selected attribute of the corresponding <pack> element to either true or false (lines 10-16).

Note

  • The core package (l. 10) is mandatory and cannot be deselected.
  • The plugins package (l. 14) is just a container entry for the different plugins. It therefore makes no difference whether it is selected or not.
  • The rest of the template file must not be changed.

Once you have completed the setup script, copy it together with the installer to the target machine. On the target machine, run the installer from the command line and provide the setup script as argument. Assuming your setup script is named auto-install.xml, use the following command to run the automatic installation.

$ java -jar 3DCityDB-Importer-Exporter-{version}-Setup.jar auto-install.xml

Setting up the 3DCityDB schema

The required scripts for setting up the 3D City Database can be found in the installation directory of the Importer/Exporter within the 3dcitydb/postgresql/ and 3dcitydb/oracle/ subfolders.

Shell Scripts

The 3D City Database is shipped with shell scripts for both Windows and UNIX/Linux/macOS. These shell scripts allow for setting up a new 3DCityDB instance but also cover additional management tasks like dropping a 3DCityDB instance or granting access rights to another database user. The scripts are interactive and prompt the user for all mandatory input. It is also possible to use the scripts in continuous integration workflows. The following table provides an overview of the different shell scripts provided in the ShellScripts/Windows and ShellScripts/Unix subfolders:

Overview of all shell scripts for PostgreSQL and Oracle
File                PgSQL  Oracle  Explanation
CONNECTION_DETAILS  x      x       Sets database connection details
CREATE_DB           x      x       Runs all scripts for creating the relational schema of a 3DCityDB incl. database types and functions
CREATE_SCHEMA       x              Creates an additional 3DCityDB instance in a separate schema within the same database
DROP_DB             x      x       Deletes all elements of the 3DCityDB
DROP_SCHEMA         x              Removes a given database schema that contains a 3DCityDB instance
GRANT_ACCESS        x      x       Grants read-only or read-write access on the 3DCityDB for a given user
REVOKE_ACCESS       x      x       Revokes access rights for a given user
MIGRATE_DB          x      x       Migrates an instance of the 3DCityDB from v2 or v3 to v4
UPGRADE_DB          x      x       Upgrades an instance of the 3DCityDB v4 to the latest version

Note

Prior to executing the shell scripts, the database connection details must be set in the CONNECTION_DETAILS script. Check the installation steps below for examples.

The shell scripts can typically be executed with a double click. On some UNIX/Linux distributions though, you will have to run the .sh scripts from within a shell environment. Please open your favorite shell and check whether execution permission is set for the shell script. Change to the location of the shell scripts and enter the following to make them executable for the owner of the file:

$ chmod u+x CREATE_DB.sh

Afterwards, simply run the shell script by typing:

$ ./CREATE_DB.sh

SQL Scripts

The shell scripts are user-friendly wrappers for SQL scripts that actually implement the database actions to set up or drop a 3DCityDB instance. You can also directly invoke the SQL scripts or use them in your workflows. In this case, please make sure to pass all required input to the scripts. The SQL scripts are organized into the following folders beneath the SQLScripts directory:

SCHEMA
Includes SQL files for creating the 3D City Database database schema including tables, constraints, datatypes, and indexes. The schema scripts are automatically generated from the schema modelling tools pgModeler (PostgreSQL) and JDeveloper (Oracle) (with minor manual edits).
CITYDB_PKG
Contains scripts that create database objects and stored procedures mainly used by the Importer/Exporter tool. They are written in PL/pgSQL (PostgreSQL) and PL/SQL (Oracle) and are grouped by the type of the implemented operation (data manipulation, maintenance etc.). The available functions and procedures are discussed in the Stored Procedures chapter.
UTIL

This folder assembles different database management utilities:

  • Grant and revoke access rights to and from a 3D City Database instance (cf. Section 3.4.2)
  • Create additional database schemas with a 3D City Database schema (PostgreSQL only, cf. Section 3.4.1)
  • Enable or disable versioning (execution can be time-consuming, Oracle only)
  • Update table statistics for spatial columns (PostgreSQL only)
MIGRATION
Provides migration scripts to update a 3DCityDB instance of a previous major version to the latest 3DCityDB version (e.g., from version 3.x to 4.x) and upgrade scripts for updating 3DCityDB instances of the same major version (e.g., from version 4.0 to 4.1). See Migration chapter for more details.

Installation steps on PostgreSQL

Step 1 - Create an empty PostgreSQL database

Choose a superuser or a user with the CREATEDB privilege to create a new database on the PostgreSQL server (e.g. ‘citydb_v4’). Choose or create a user as owner of this new database who will also set up the 3D City Database schema. In the following steps, this user is called ‘citydb_user’. If you want to set up the schema with a user who is not the database owner, you have to grant this user proper permissions.

Connect to the database and type

CREATE DATABASE citydb_v4 OWNER citydb_user;

or use a graphical database client such as pgAdmin that is shipped with PostgreSQL. Please check the pgAdmin documentation for more details.
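
For example, to create the database from the command line with psql, using the superuser account postgres:

$ psql -h localhost -p 5432 -U postgres -c "CREATE DATABASE citydb_v4 OWNER citydb_user;"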

Step 2 – Add the PostGIS extension

The 3D City Database requires the PostGIS extension to be added to the database. This can only be done as superuser. The extension is added with the following command (or, alternatively, using pgAdmin):

CREATE EXTENSION postgis;

Some 3D operations such as extrusion or volume calculation are only available through the PostGIS SFCGAL extension. This extension is optional and only needed if you want to use the additional functionality. The installed PostGIS extension should at least be version 2.2 to be able to use the SFCGAL extension:

CREATE EXTENSION postgis_sfcgal;

Note

Starting from PostGIS v3, all the raster functionality has been moved to a separate extension postgis_raster. Since the 3DCityDB requires the raster functionality, this extension must be installed if PostGIS 3 or a higher version is used.

CREATE EXTENSION postgis_raster;
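
To verify the installed PostGIS version and which optional components (such as raster or SFCGAL support) are enabled, you can run:

SELECT PostGIS_full_version();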

Step 3 – Edit the CONNECTION_DETAILS[.sh | .bat] script

Go to the 3dcitydb/postgresql/ShellScripts directory, choose the folder corresponding to your operating system and open the file named CONNECTION_DETAILS with a text editor. There are five variables that will be used to connect to the DBMS. If psql is already registered in your system path, you do not have to set the directory for the PGBIN variable. The other parameters should be obvious to PostgreSQL users. Here is an example of how the complete CONNECTION_DETAILS can look under Windows:

set PGBIN=C:\Program Files\PostgreSQL\13\bin  ::Directory containing the psql binary
set PGHOST=localhost                          ::Name of the database server
set PGPORT=5432                               ::Port of the database server
set CITYDB=citydb_v4                          ::Name of the 3DCityDB database to connect to
set PGUSER=citydb_user                        ::Database user to connect with
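
On Linux or macOS, the corresponding CONNECTION_DETAILS.sh might look as follows; the variable names are the same, and the psql directory shown here is only an example:

export PGBIN=/usr/bin           # Directory containing the psql binary
export PGHOST=localhost         # Name of the database server
export PGPORT=5432              # Port of the database server
export CITYDB=citydb_v4         # Name of the 3DCityDB database to connect to
export PGUSER=citydb_user       # Database user to connect with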

Step 4 - Execute the CREATE_DB script

As soon as the database credentials are defined, run the CREATE_DB script. It is located in the same folder as CONNECTION_DETAILS (see also Section 1.3.1).

Step 5 – Specify the coordinate reference system

After executing the CREATE_DB script, the user is prompted for the coordinate reference system (CRS) to be used in the 3D City Database. You have to enter the PostGIS-specific SRID (spatial reference ID) of the CRS which – in most cases – corresponds to the EPSG code of the CRS. There are three prompts in total to define the spatial reference:

  • First, specify the SRID to be used for the geometry columns of the database. Unlike previous versions of the 3D City Database, there is no default CRS defined.
  • Second, specify the SRID of the height system if no true 3D CRS is used for the data. This can be regarded as metadata and has no effect on the geometry columns in the database. The default value is 0 – which means “not set”.
  • Third, provide the GML-compliant uniform resource name (URN) encoding of the CRS. The default value uses the OGC namespace and is composed of the first two user inputs: urn:ogc:def:crs,crs:EPSG::<crs1>[,crs:EPSG::<crs2>].

More information about the SRID and the URN encoding can be found in Section 3.3.
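
For example, for data georeferenced in ETRS89 / UTM zone 32N (EPSG:25832) with heights given in DHHN92 (EPSG:5783), the URN reads:

urn:ogc:def:crs,crs:EPSG::25832,crs:EPSG::5783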

Note

The setup process will terminate immediately if an error occurs. Possible reasons are:

  • The user executing the CREATE_DB script is neither a superuser nor the owner of the specified database (or does not have privileges to create objects in that database);
  • The PostGIS extension has not been installed; or
  • Parts of the 3D City Database already exist because of a previous setup attempt. Therefore, make sure that the schemas citydb and citydb_pkg do not exist in the database when setting up the 3D City Database.

After a series of log messages reporting the creation of the 3DCityDB schema and stored procedures, the chosen reference system is applied to the spatial columns (except for those that will store data with local coordinate systems). This takes some seconds. The setup process is successfully completed when ‘Done’ is printed to the console.

The following figure exemplifies the user input for the CREATE_DB script.

Figure: Example user input when executing CREATE_DB for a PostgreSQL database.

Step 6 – Check if the setup is correct

The 3D City Database is stored in a separate PostgreSQL schema called citydb. The stored procedures are written to a separate PostgreSQL schema called citydb_pkg. Usually, different schemas have to be addressed in every query via dot notation, e.g.

SELECT * FROM citydb.building;

Fortunately, this can be avoided when the corresponding schemas are on the database search path. The search path is automatically adapted during the setup. Execute the command

SHOW search_path;

to check if the search path contains the schemas citydb, citydb_pkg and public (required for PostGIS elements).
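
In addition, you can verify that the stored procedures are installed by querying the version of the 3DCityDB instance:

SELECT * FROM citydb_pkg.citydb_version();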

Note

When using the created 3D City Database as a template database for new databases, the search path information is not transferred and thus has to be set again for the new database, e.g.:

ALTER DATABASE new_citydb_v4 SET search_path TO citydb, citydb_pkg, public;

The search path will be updated upon the next login, not within the same session.

To drop the 3D City Database with all data, execute the DROP_DB script in the same way as CREATE_DB. Simply dropping the schemas ‘citydb’ and ‘citydb_pkg’ in a cascading way will also do the job.
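
For example, the following statements remove both schemas together with all contained data:

DROP SCHEMA citydb CASCADE;
DROP SCHEMA citydb_pkg CASCADE;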

Installation steps on Oracle

Step 1 - Define a user for the 3D City Database

A dedicated database user should be created for your work with the 3D City Database. This user must have the roles CONNECT and RESOURCE assigned and must own the privileges CREATE SEQUENCE and CREATE TABLE.

Note

The privileges CREATE SEQUENCE and CREATE TABLE are required for enabling and disabling spatial indexes. It is not sufficient to inherit these privileges through a role.
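
A minimal sketch of such a user setup (the user name and password are examples; tablespace and quota settings are omitted) could look like this:

CREATE USER citydb_v4 IDENTIFIED BY mySecretPassword;
GRANT CONNECT, RESOURCE TO citydb_v4;
GRANT CREATE SEQUENCE, CREATE TABLE TO citydb_v4;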

Step 2 – Edit the CONNECTION_DETAILS[.sh | .bat] script

Go to the 3dcitydb/oracle/ShellScripts directory, choose the folder corresponding to your operating system and open the file named CONNECTION_DETAILS with a text editor. There are five variables that will be used to connect to the DBMS. If SQL*Plus is already registered in your system path, you do not have to set the directory for the SQLPLUSBIN variable. The other parameters should be obvious to Oracle users. Here is an example of how the complete CONNECTION_DETAILS can look under Windows:

set SQLPLUSBIN=C:\Oracle\instantclient_11_2    ::Directory containing the SQL*Plus binary
set HOST=localhost                             ::Name of the database server
set PORT=1521                                  ::Port of the database server
set SID=orcl                                   ::SID of the 3DCityDB database to connect to
set USERNAME=citydb_v4                         ::Database user to connect with

Note

The scripts to grant or revoke read access require SYSDBA privileges. You can specify a SYSDBA user in the CONNECTION_DETAILS script using the additional parameter called SYSDBA_USERNAME.
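
For example, the following line could be added to the Windows CONNECTION_DETAILS script (the user name is just an example):

set SYSDBA_USERNAME=SYS                        ::User with SYSDBA privileges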

Step 3 - Execute the CREATE_DB script

As soon as the database credentials are defined, run the CREATE_DB script. It is located in the same folder as CONNECTION_DETAILS (see also Section 1.3.1).

Step 4 - Define the coordinate reference system

Like with PostgreSQL, the user is prompted to enter the SRID used for the geometry columns, the SRID of the height system and the GML compliant URN encoding of the coordinate reference system (see Section 3.3 for more information).

Step 5 – Enable or disable versioning

After providing the CRS information, the user is asked whether or not the database should be version-enabled. Versioning is realized based on Oracle’s Workspace Manager functionality (see the Oracle documentation for more information). Please enter ‘yes’ or ‘no’. The default value ‘no’ is confirmed by simply pressing Enter. Note that, in general, insert, update, delete and index operations on version-enabled tables take considerably more time than on tables without versioning support.

Step 6 – Choose Spatial or Locator license option

You can set up a 3D City Database instance on an Oracle database with Spatial or Locator support. Since Locator differs from Spatial with respect to the available spatial data types, you need to specify which license option is valid for your Oracle installation. Simply enter ‘L’ for Locator or ‘S’ for Spatial (default value) to make your choice.

Note

Since Locator lacks the GeoRaster data type, the 3D City Database tables for storing raster reliefs (RASTER_RELIEF, GRID_COVERAGE, GRID_COVERAGE_RDT) are not created when choosing Locator.

Note

Several spatial operations and functionalities that are available in Oracle Spatial are not covered by the Locator license even though they might be available from your Oracle installation. It is the responsibility of the database user to observe the Oracle license option. Choosing Locator or Spatial when setting up the 3D City Database does neither affect the license option nor the users’ responsibility.

Step 7 – Check if the setup is correct

After successful completion of the setup procedure, the tables, sequences and packages (that contain stored procedures) should appear in the user schema.

Versioning of the database can also be switched on and off at any time. The corresponding scripts are ENABLE_VERSIONING and DISABLE_VERSIONING. These scripts invoke routines of the Oracle Workspace Manager and will take some time for execution depending on the amount of data stored in the 3D City Database instance.

Last but not least, the schema and stored procedures of the 3D City Database can be dropped with the DROP_DB script, which is executed like CREATE_DB. Similar to CREATE_DB, you need to provide the license option (Locator or Spatial). Note that the script will delete all data stored in the 3D City Database schema. The database user will, however, not be deleted.

Migration from previous releases

Shell scripts for migrating an existing 3DCityDB instance to the latest version are located in the folder ShellScripts/[Windows|Unix]/MIGRATION inside the installation directory of the Importer/Exporter tool.

You will only need the migration scripts for a major version update, for instance, if you want to upgrade a 3DCityDB instance from version 2.x or 3.x to version 4.x. For minor version updates (i.e., from version 4.x to 4.y), separate upgrade scripts are provided (see Section 1.4.4 below). A migration might involve changes to the 3DCityDB tables or objects, so please make sure to backup your data before starting the migration.

Hint

Another safe and simple migration approach is to export the database content from the v2.x/v3.x instance as CityGML with the previous version of the Importer/Exporter and re-import the data into the new 3D City Database version using the new Importer/Exporter shipped with this release. This approach might take more time though, depending on the amount of data stored in the database.

Note

The migration scripts do not handle version-enabled tables under Oracle. Therefore, if you are using Oracle and have enabled versioning, then exporting and re-importing the data is the recommended way to migrate to the new 3DCityDB version.

To start the migration process, run the MIGRATE_DB shell script. Make sure that the database credentials entered in the CONNECTION_DETAILS file are correct. The script will first prompt the user for the major version number of the currently installed 3D City Database instance – either `2` or `3`. To identify the actual version of your 3D City Database, you can use the Importer/Exporter tool to connect to the 3D City Database instance that you want to upgrade. Starting from v3.0.0, the version string is printed to the console window after the connection has been successfully established as shown below (see also Section 4.3).

Figure: Version information of a 3D City Database.

If the version string does not show up, you are running a v2.x instance. Alternatively, the version information can also be queried using database functions.

For PostgreSQL the command is:

psql> SELECT major_version FROM citydb_pkg.citydb_version();

For Oracle it is:

SQL> select MAJOR_VERSION from table(CITYDB_UTIL.CITYDB_VERSION);

If the function is not known to the system, you are probably running a v2.x instance. For Oracle, migrating from v2 to v4 has some prerequisites which are explained in the next chapter.

Migration from v2 to v4 on PostgreSQL

Step 1 – Run MIGRATE_DB

For PostgreSQL, migrating an existing v2.x instance of the 3DCityDB simply requires executing the MIGRATE_DB shell script and choosing the value 2 as first user input.

Step 2 – Be sure of using unique texture URIs

Starting from v3 of the 3D City Database, textures that are referenced by more than one geometry are no longer stored redundantly in the SURFACE_DATA table but only once in the TEX_IMAGE table. This optimization can also be applied during the migration process, if you can guarantee that texture URIs are unique and not used for different texture files. Otherwise, some textures would get lost during the migration and the remaining images would be referenced by the wrong surfaces.

If you can assure that there are no duplicate texture URIs in your database, then trigger the optimization by choosing ‘y’ or ‘yes’ as second input for the migration script. If you know that different textures in the database share the same URI (or if you do not know), you can still run the script by entering ‘n’ (which is also the default value). Entries in the TEX_IMAGE column of the SURFACE_DATA table from v2 are then mapped 1:1 to the TEX_IMAGE table of v4.

Note

A simple unification of texture URIs in advance of the migration will not help to store the textures only once, because same textures with different URIs are regarded as different image files and would all end up in the new TEX_IMAGE table. You would have to compare the binary data itself.

Step 3 – Check if the setup is correct

After a series of log messages reporting the selection of data from the v2.x schema, updates of references, and the creation of database objects, the script finishes with the message ‘3DCityDB migration complete!’. If the old database schema is not dropped during the migration (see last step), both versions of the 3D City Database will remain in one database. This is actually a good thing, because you can compare whether everything has been transferred correctly.

Re-run migration if required

If the migration process has been interrupted by the user or by severe software errors, the migration script can simply be executed again (as long as the old v2.x schema still exists) without manually dropping already created parts of the v4 schema, because the script does this for you.

Step 4 – Drop the deprecated v2.x schema

To remove the deprecated parts of your 3D City Database invoke the DROP_DB_V2 shell script.

Caution

DO NOT execute the DROP_DB script from the ShellScripts/[Windows|Unix] folder. The old and new instances of the 3D City Database are both stored inside the same database (new = citydb schema, old = public schema). DROP_DB drops all database schemas for which it finds a DATABASE_SRS table, so all your data would be lost. So be careful to use DROP_DB_V2 instead!

Migration from v2 to v4 on Oracle

Step 1 – Upgrade an existing installation

The migration to v4.x must be carried out on a version 2.1.0 instance of the 3D City Database. Versions prior to version 2.1.0 must first be upgraded to 2.1.0 since the internal storage of envelopes of city objects changed substantially. Corresponding upgrade scripts are shipped with the v2.1.0 release. Upgrades to 2.1.0 can be carried out from any older version 2.0.0 to 2.0.6. A more detailed description of the upgrade procedure can be found in the version 2.1.0 manual.

Before upgrading your 3D City Database, a database backup is highly recommended to secure all data. This can easily be done using the Importer/Exporter tool or the tools provided by Oracle.

Note

Please note that the last step in the upgrade process is a lengthy one. Altering the internal storage of the envelopes of all city objects in a large and/or versioned database may take hours. Depending on their initial state, spatial indexes may be disabled and re-enabled in the process, adding to the duration as a whole. This process MUST NOT be interrupted since it could lead to an inconsistent state. Please be patient and remember that backing up all of your data before starting any database upgrade is the commonly recommended practice.

Step 2 – Creating a new installation

The migration script transfers data from a user schema with the v2.1.0 installation to another user schema that has to contain the 3D City Database schema v4. Install the new version as described in Section 1.3 if you have not done so yet.

Step 3 – Grant select on v2.1.0 schema to v4 schema

The migration process requires that the user with the v4 schema can access the user schema with the v2.1.0 version. Therefore, run the GRANT_ACCESS_V2 shell script (see Section 1.3.1) as the v2 user. When executed, you are prompted for the schema name of the 3D City Database v4 instance.

Step 4 – Run MIGRATE_DB

Now, start the MIGRATE_DB script located in the same folder as GRANT_ACCESS_V2 as the v4 user. Choose the value 2 as first input and specify the name of the schema containing the v2.1.0 instance.

Step 5 – Be sure of using unique texture URIs

Like with the PostgreSQL version, you are requested to guarantee that there are no duplicate texture URIs used for different images, or enter ‘n’ to skip the texture storage optimization. See Step 2 in the description of the PostgreSQL migration for more details.

Step 6 – Choose Spatial or Locator license option

With the last input parameter, you specify the database license running on your Oracle server, as you did when setting up the v4 instance of the 3D City Database. Choose ‘S’ for Spatial (which will additionally migrate raster data) or ‘L’ for Locator.

Step 7 – Check if the setup is correct

The script temporarily disables database indexes and foreign key constraints and creates an additional package with migration procedures (CITYDB_MIGRATE). The package is removed again when the migration process is completed and the message “DB migration is completed.” is displayed on the console. It is recommended to generate a database report of the new user schema and compare it with a report of the schema that contains the 2.1 instance of the 3D City Database (done with the previous version of the Importer/Exporter tool). Verify that

  • no city objects are missing (do a database report),
  • indexes and foreign keys got activated again,
  • relations between features and attributes are correct, and
  • exports look correct inside a viewer application.

Step 8 – Drop the deprecated v2.x schema

If the migration was successful, the v2.x user simply has to invoke the DROP_DB script (of version 2.x) to drop the deprecated schema. Deleting the v2.x user works as well.

Migration from v3 to v4

The migration process from v3 to v4 does not require any user inputs after entering the value `3` in the MIGRATE_DB script (except for choosing the license under Oracle).

Note

Schema changes on existing tables are applied with ALTER TABLE statements which can lock these tables for a longer period if they contain millions of rows.

Upgrade between minor releases

Every minor release of the 3D City Database is shipped with an UPGRADE_DB shell script to update an existing database instance that is already of version 4.x. This script can also be found in the MIGRATION folder.

When running the UPGRADE_DB script, it will check whether a minor version update is appropriate or whether a major version update must be performed instead. In the latter case, the script terminates with a corresponding error message. It is always recommended to also manually check the version information of your existing 3DCityDB instance as described above before running the upgrade script.

During the upgrade, check the output messages of the script for errors and warnings. The process should finish with the message ‘3D City Database upgrade complete’.
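
For example, on Linux the upgrade is started like the other shell scripts from within the MIGRATION folder:

$ ./UPGRADE_DB.sh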

Hint

When using the PostgreSQL version, it is highly recommended to execute the VACUUM ANALYZE command after an upgrade or migration process in order to increase the overall database performance.
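
For example, connect to the database with psql (or pgAdmin) and simply run:

VACUUM ANALYZE;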

Docker Images

Docker is a widely used virtualization technology that makes it possible to pack an application with all its required resources into a standardized unit - the Docker Container. Software encapsulated in this way can run on Linux, Windows, macOS and most cloud services without any further changes or setup process. Docker containers are lightweight compared to traditional virtualization environments that emulate an entire operating system because they contain only the application and all the tools, program libraries, and files it requires.

For instance, Docker enables you to get a 3DCityDB instance up and running in a few seconds, without having to set up a database server or the 3DCityDB database schema, as shown in Fig. 1.4.

Figure 1.4: 3DCityDB Docker runtime. Set up a 3DCityDB instance using Docker and establish a connection to the ready-to-use 3DCityDB in seconds.

Docker images are available for the following tools of the 3DCityDB software suite:

  • 3D City Database (PostgreSQL/PostGIS)
  • Importer/Exporter
  • 3D Web Map Client
  • Web Feature Service (WFS)

All images are available from 3DCityDB DockerHub.

The following sections provide quick start code snippets for all 3DCityDB Docker images to get you running in a few seconds. For a more comprehensive documentation please visit the individual chapters of each image.

Note

Replace the line continuation character \ with ^ for Windows systems.

3DCityDB Docker

To run a PostgreSQL/PostGIS 3DCityDB container the only required settings are a database password (POSTGRES_PASSWORD) and the EPSG code of the coordinate reference system (SRID) of the 3DCityDB instance. Use the docker run -p switch to define a port to expose to the host system for database connections.

The detailed documentation for the 3DCityDB Docker image is available here.

docker run -d -p 5432:5432 --name cdb \
  -e POSTGRES_PASSWORD=changeMe! \
  -e SRID=25832 \
3dcitydb/3dcitydb-pg

A container started with the command above will host a 3DCityDB instance configured like this:

CONTAINER NAME    cdb
DB HOST           localhost or 127.0.0.1
DB PORT           5432
DB NAME           postgres
DB USER           postgres
DB PASSWD         changeMe!
DB SRID           25832
DB GMLSRSNAME     urn:ogc:def:crs:EPSG::25832
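
With these settings you can, for example, connect to the containerized database from the host system using psql (assuming a psql client is installed on the host) and the password given above:

$ psql -h localhost -p 5432 -U postgres -d postgres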

Importer/Exporter Docker

The 3DCityDB Importer/Exporter Docker image exposes the Command Line Interface (CLI) of the 3DCityDB Importer/Exporter. For all export or import operations a shared folder (docker run -v) to exchange data with the host system is required. It is recommended to run the container as the currently logged in user and group (docker run -u) to ensure files are readable/writeable.

The detailed documentation for the 3DCityDB Importer/Exporter Docker image is available here, the documentation of the CLI is available here.

docker run -i -t --name impexp --rm \
  -u $(id -u):$(id -g) \
  -v /local/data/dir:/data \
3dcitydb/impexp COMMAND

Use the help command to see the CLI documentation and list all available commands:

docker run -i -t --name impexp --rm 3dcitydb/impexp:edge-alpine help

Run help COMMAND to see the CLI documentation for a specific command:

docker run -i -t --name impexp --rm 3dcitydb/impexp:edge-alpine help export

For instance, a simple CityGML export looks like this:

docker run -i -t --name impexp --rm \
  -u $(id -u):$(id -g) \
  -v /local/data/dir:/data \
  3dcitydb/impexp \
    export -H my.citydb.host.de -d postgres -p postgres -u postgres -o out.gml

The exported file will be available on the host system at: /local/data/dir/out.gml.

3D-Web-Map-Client Docker

The 3DCityDB 3D-Web-Map-Client Docker image provides an instance of the 3DCityDB 3D-Web-Map-Client. Use the docker run -p switch to expose a port for connections to the web client.

Currently, the Webclient Docker images are maintained and documented at the TUM-GIS 3D-Web Client Docker repo.

docker run -d --name 3dwebmap-container -p 80:8000 tumgis/3dcitydb-web-map

Web Feature Service (WFS) Docker

The 3DCityDB Web Feature Service (WFS) Docker image exposes the capabilities of the Web Feature Service for dockerized applications and workflows. Using the WFS Docker you can expose the features stored in a 3DCityDB instance through an OGC WFS interface offering a rich set of features like advanced filter capabilities. For a basic configuration just the connection credentials of the 3DCityDB (CITYDB_* variables) have to be specified.

All WFS functionalities are supported by the images.

The detailed documentation for the Docker image is available in Web Feature Service using Docker.

docker run --name wfs [-d] -p 8080:8080 \
    [-e CITYDB_TYPE=PostGIS|Oracle] \
    [-e CITYDB_HOST=the.host.de] \
    [-e CITYDB_PORT=5432] \
    [-e CITYDB_NAME=theDBName] \
    [-e CITYDB_SCHEMA=theCityDBSchemaName] \
    [-e CITYDB_USERNAME=theUsername] \
    [-e CITYDB_PASSWORD=theSecretPass] \
    [-e WFS_CONTEXT_PATH=wfs-context-path] \
    [-e WFS_ADE_EXTENSIONS_PATH=/path/to/ade-extensions/] \
    [-e WFS_CONFIG_FILE=/path/to/config.xml] \
    [-v /my/data/config.xml:/path/to/config.xml] \
  3dcitydb/wfs[:TAG]

The individual components of the 3D City Database are also available as images for the Docker virtualization technology. This makes it possible to install and configure a 3D City Database with a single command line statement in almost any runtime environment. See Section 1.5 for more details.

Overview

The aim of the 3D City Database is to provide a high-performance and scalable datastore for virtual 3D city models, also known as Digital (Geo-)Twins. The database schema implements and is fully compliant with the conceptual data model of the OGC standard CityGML 2.0. This allows you to store, manage, analyze, and even visualize your 3D geodata based on an open and interoperable standard rather than sealing it in proprietary data silos.

The 3D City Database and all bundled tools are free and open source software. The database schema is available for the open source database PostgreSQL/PostGIS and the commercial Oracle Spatial/Locator Database solution. Both database systems are highly optimized, de-facto industry standards for storing and querying 3D spatial data. The geometry data types and spatial functions offered by both systems follow common OGC and GIS standards. This way the data stored in the 3D City Database can be easily accessed and consumed by both open source and commercial GIS tools such as QGIS, ESRI ArcGIS or Safe Software’s Feature Manipulation Engine (FME).

This chapter introduces the key aspects and main features of the 3D City Database and provides a brief development history as well as acknowledgements to supporters of the project. Contributions to the 3D City Database project are always welcome!

Introduction

Virtual 3D city and landscape models are provided for an increasing number of cities, regions, states, and even countries. They are created and maintained by public authorities like national and state mapping agencies as well as by cadastre institutions and private companies. The 3D topography of urban and rural areas is essential for both visual exploration and a range of different analyses in, for example, the urban planning, environmental, energy, transportation, and facility management sectors.

3D city models are nowadays used as an integrative information backbone representing the relevant urban entities along with their spatial, semantic, and visual properties. They are often created and maintained with full coverage of entire cities and even countries, i.e. all real world objects of a specific type like buildings, roads, trees, water bodies, and the terrain are explicitly represented. In most cases the 3D city model objects have well-defined identifiers, which are kept stable during the lifetime of the real world objects and their virtual counterparts. Such complete 3D models are a good basis to organize different types of data and sensors within Smart City projects as they build a stable platform for information linking and enrichment.

In order to establish a common understanding and interpretation of the urban objects and to achieve interoperable access and exchange of complete 3D models including the geometric, topologic, visual, and semantic data, the Open Geospatial Consortium (OGC) has issued the CityGML standard [Kolb2009]. CityGML defines a feature catalogue and data model for the most relevant 3D topographic elements like buildings, bridges, tunnels, roads, railways, vegetation, water bodies, etc. The data model is mapped to an XML-based exchange format using OGC’s Geography Markup Language (GML).

The 3D City Database (3DCityDB) is a free and open source package consisting of a database schema and a set of software tools to import, manage, analyse, visualize, and export virtual 3D city models according to the CityGML standard [YNKH2018]. The database schema results from a mapping of the object oriented data model of CityGML 2.0 to the relational structure of a spatially-enhanced relational database management system (SRDBMS). The 3DCityDB supports the open source SRDBMS PostgreSQL (with PostGIS extension) and the commercial SRDBMS Oracle (with Spatial or Locator license options). The 3DCityDB makes use of the specific representation and processing capabilities of the SRDBMS regarding the spatial data elements. It can also handle very large models in multiple levels of detail consisting of millions of 3D objects with hundreds of millions of geometries and texture images.

The 3DCityDB is in use in real life production systems in many places around the world and is also being used in a number of research projects. For example, the cities of Berlin, Potsdam, Munich, Frankfurt, Zurich, Rotterdam, and Singapore all keep and manage their virtual 3D city models within an instance of the 3DCityDB. The companies Virtual City Systems (VCS) and M.O.S.S., who are also partners in development, use the 3DCityDB at the core of their commercial products and services to create, maintain, visualize, transform, and export virtual 3D city models (see Appendix B, Appendix C, and Appendix D for examples how and where TUM, Virtual City Systems, and M.O.S.S. employ the 3DCityDB in their projects). Furthermore, the state mapping agencies of all 16 states in Germany store and manage the state-wide collected 3D building models in CityGML LOD1 and LOD2 using the 3DCityDB. In 2012, the 3DCityDB and the developer team received the Oracle Spatial Excellence Award, issued by Oracle USA.

Since the 3DCityDB is based on CityGML, interoperable data access from user applications to the database can be achieved in at least two ways:

  1. by using the included high-performance CityGML Importer/Exporter tool or the included basic Web Feature Service 2.0 in order to exchange the data in CityGML format (version 2.0 or 1.0), and
  2. by directly accessing the database tables whose relational structures are fully explained in detail within this document (see the example query after this list). It is easy to enrich a 3D city model by adding information to the database tables in some user application (using e.g. the database APIs of programming languages like C++, Java, or Python, or ETL tools like the Feature Manipulation Engine from Safe Software). The enriched dataset can then be exchanged or archived by exporting the city model to CityGML without information loss. Analogously, the 3DCityDB can be used to import a CityGML dataset and then access and work with the city model by directly accessing the database tables from application programs or ETL software.
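
As an illustrative sketch only (assuming the default citydb schema and the BUILDING and CITYOBJECT tables of the relational schema described later in this document), a direct query for building heights in PostgreSQL could look like this:

SELECT co.gmlid, b.measured_height
FROM citydb.building b
JOIN citydb.cityobject co ON co.id = b.id;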

The Importer/Exporter tool also provides functionalities for the direct export of 3D visualization models in KML, COLLADA, and glTF formats. A tiling strategy is supported which allows visualizing even very large 3D city and landscape models in geoinformation systems (GIS) or digital virtual globes like Google Earth or the CesiumJS Virtual Globe.

Starting from release 3.3.0, the 3DCityDB software bundle contains the CesiumJS-based 3D viewer called “3DCityDB-Web-Map-Client” which facilitates the interactive visualization and exploration of 3D city models over the internet within web browsers on desktop and mobile computers. The most significant new functionality in release 4.0.0 is the support of CityGML Application Domain Extensions (ADEs). ADEs extend the CityGML data model by domain specific object types, attributes, and relations.

The Importer/Exporter provides a Plugin API to create further importers, exporters, and database administration tools. The software is shipped with the two optional plugins: 1) The “Spreadsheet Generator Plugin” (SPSHG) to export thematic data of 3D objects into tables in CSV and Microsoft Excel format that can be easily published as online spreadsheets (e.g., using Google Docs) and linked with the 3DCityDB-Web-Map-Client, and 2) the “ADE Manager Plugin” to dynamically extend the 3DCityDB core schema with tables and objects for storing and managing CityGML ADEs.

This documentation describes the design and the components of the 3D City Database as well as their usage for the major release 4 which has been developed and implemented by the three partners in development, namely the Chair of Geoinformatics at Technische Universität München, Virtual City Systems, and MOSS.

The development is continuing the previous work carried out at the Institute for Geodesy and Geoinformation Science of the Berlin University of Technology and the Institute for Cartography and Geoinformation of the University of Bonn.

Some figures and texts are cited from the OpenGIS City Geography Markup Language (CityGML) Encoding Standard, Version 2.0.0 [GKNH2012].

Main features of 3DCityDB

CityGML 2.0 and 1.0 compliant database

The CityGML data model defines classes and relations for the most relevant topographic objects in cities and regional models with respect to their geometrical, topological, semantic, and appearance properties. Included are generalization hierarchies between thematic classes, aggregations, relations between objects, and spatial properties. This thematic information goes beyond graphic exchange formats and allows virtual 3D city models to be employed for sophisticated analysis tasks in different application domains.

The 3D City Database maps the CityGML data model to a relational database schema and supports the management of CityGML data. For the representation of vector and grid-based geometry, the built-in data types provided by the spatially-enhanced relational database management systems PostgreSQL (9.6 or higher) with PostGIS extension (2.3 or higher) and Oracle Spatial/Locator (10g R2 or higher) are used. This way, special solutions are avoided and different geoinformation systems, CAD/BIM systems, and ETL software systems can directly access (read and write) the geometry objects stored in the SRDBMS.

Support for CityGML Application Domain Extensions (ADEs)

Semantic 3D city models are employed for many different applications from diverse domains like energetic, environmental, driving, and traffic simulations, as-built building information modeling (as-built BIM), asset management, and urban information fusion. In order to store and exchange application specific data aligned and integrated with the 3D city objects, the CityGML data model can be extended by new feature types, attributes, and relations using the CityGML ADE mechanism. ADEs are specified as (partial) GML application schemas using the modeling language XML Schema. Starting from release 4.0.0 the 3DCityDB database schema can be dynamically extended by arbitrary ADEs like the Energy ADE, UtilityNetwork ADE, Dynamizer ADE, or national CityGML extensions like IMGeo3D (from The Netherlands).

Since ADEs can define an arbitrary number of new elements with all types and numbers of spatial properties, a transformation method has been developed to automatically derive the relational database schemas for arbitrary ADEs from the ADE XML schema files. Since we intended to follow similar rules in the mapping of the object-oriented ADE models onto relational models as we used for the (manual) mapping of the CityGML data model onto the 3DCityDB core schema, the Chair of Geoinformatics at TUM developed a new transformation method based on graph transformation systems. This method is described in detail in [YaKo2017] and is implemented within the “ADE Manager Plugin” for the Importer/Exporter software tool.

The ADE Manager performs a sophisticated analysis of the XML schema files of an ADE, the automatic derivation of additional relational table structures, and the registration of the ADE within the 3DCityDB. Furthermore, SQL scripts are generated for each ADE, e.g. for the deletion of ADE objects and attributes from the database. Please note that in order to also support the import and export of CityGML datasets with ADE contents, a Java library for the specific ADE has to be implemented. This library has to perform the handling of the CityGML ADE XML elements and the reading from and writing into the respective ADE database tables using JDBC and SQL. An example of how to develop such a Java library is given for a Test ADE in the 3DCityDB GitHub repository.

Importing and exporting CityGML data

The included Importer/Exporter software tool allows for high-performance importing and exporting of CityGML datasets according to CityGML versions 2.0 and 1.0. The tool allows processing of very large datasets (>> 4 GB), even if they include XLinks between CityGML features or XLinks to geometry objects. The multi-threaded implementation exploits multiprocessor systems or multi-core CPUs to speed up the processing of complex XML structures, resulting in high-performance database access. Objects can be filtered during import or export according to spatial regions (bounding box), their object IDs, feature types, names, and levels of detail. Bounding boxes can be interactively selected using a map window based on OpenStreetMap (OSM).

A tiling strategy is implemented in order to support the export of very large datasets. In case of a very high number of texture images, they can be automatically distributed across a configurable number of subdirectories in order to avoid large directories with millions of files, which can render a Microsoft Windows operating system unresponsive. The Importer can also validate CityGML files and can be configured to only import valid features. It considers CityGML ADE contents if the ADEs have been registered in the database and specific Java libraries for reading/writing the ADE contents from/into the ADE database tables are provided (see above). The Importer/Exporter tool can be run in both interactive and batch mode.

Importing and exporting CityJSON data

In addition to the CityGML format, the Importer/Exporter also supports datasets in CityJSON format. CityJSON is a JSON-based encoding for storing 3D city models and, thus, offers an alternative to the GML/XML encoding of CityGML. It implements a subset of the CityGML 2.0 data model. The CityGML compatibility page provides a list of those CityGML 2.0 features that are supported or omitted in CityJSON.

CityJSON is a candidate for becoming an OGC Community Standard.

Export to KML, COLLADA and glTF

The Importer/Exporter tool can also export city models to KML, COLLADA, and glTF formats, which can directly be viewed and interactively explored in geoinformation systems (GIS) or digital virtual globes like Google Earth or the Cesium WebGL Virtual Globe. A tiling strategy is supported where only tiles in the vicinity of the viewer’s location are loaded, facilitating the visualization of even very large 3D city and landscape models. Information balloons for all objects can be configured by the user. The exported models are especially suited to be visualized using the 3DCityDB-Web-Map-Client (see below), an open source 3D web viewer based on the CesiumJS web globe framework with many functional extensions.

Spreadsheet export

The Spreadsheet Generator Plugin (SPSHG) allows exporting thematic data of 3D objects into tables in CSV and Microsoft Excel format, which can be uploaded to a Google Spreadsheet within the Google Document Cloud. For every selected geo-object, one row is exported whose first column always contains the GMLID value of the respective object. The remaining columns can be selected by the user. This tool can be used to export attribute data of e.g. buildings, such as the class, function, usage, roof type, address, and further generic attributes that may contain information like the building energy demand, potential solar energy gain, or noise level on the facades. The spreadsheet rows can be linked to the visualization model generated by the KML/COLLADA/glTF Exporter. This is illustrated in Section 8.2.

Interactive 3D web visualization

The 3DCityDB-Web-Map-Client is a WebGL-based 3D web viewer which extends the Cesium Virtual Globe to support efficient displaying, caching, prefetching, dynamic loading and unloading of arbitrarily large pre-styled 3D visualization models in the form of tiled KML/glTF datasets generated by the KML/COLLADA/glTF Exporter. It provides an intuitive user interface to facilitate rich interaction with 3D visualization models by means of enhanced functionality like highlighting objects of interest on mouseover and mouse click as well as hiding, showing, and shadowing them. Moreover, the 3DCityDB-Web-Map-Client is able to link the 3D visualization model with an online spreadsheet (Google Fusion Table) in the Google Cloud and allows viewing and querying the thematic data of every city object according to its GMLID. For details see also [YaCK2016] and [ChYK2015].

Web Feature Service (WFS) 2.0

The 3DCityDB comes with an OGC-compliant implementation of a basic WFS 2.0 allowing web-based access to the 3D city objects stored in the database. WFS clients can directly connect to this interface and retrieve 3D content in CityGML and CityJSON format for a wide variety of purposes. The WFS supports CityGML ADE contents if the ADEs have been registered in the database and specific Java libraries for reading/writing the ADE contents from/into the ADE database tables are provided (see above). An implementation of a transactional WFS supporting the additional operations insert, update, replace, and delete for data management is commercially available from one of the development partners, see Section 8.3.

Docker support

We now provide Docker images for

  1. a complete 3DCityDB installation pre-installed in a PostGIS database
  2. a webserver with an installed 3DCityDB-Web-Map-Client
  3. a 3DCityDB WFS

We also provide a Docker Compose script to launch all three Docker containers in a linked way with just a single command. Details are given in Section 1.5 and in the respective GitHub repositories. Docker is a runtime environment for virtualization. Docker encapsulates individual software applications in so-called containers, which are – in contrast to virtual machines – light-weight and can be deployed, started, and stopped very quickly and easily. Using our Docker images, a 3DCityDB can be installed with a single command.

Open Source and Platform Independent

The entire software is freely accessible to the interested public. The 3DCityDB is licensed under the Apache License, Version 2.0, which allows including 3DCityDB in commercial systems. You may obtain a copy of the Apache License at http://www.apache.org/licenses/LICENSE-2.0. Both the Importer/Exporter tool and the Web Feature Service are implemented in Java and can be run on different platforms and operating systems.

Features inherited from CityGML

Complex city object modelling
The representation of city objects in the 3D city database ranges from coarse models to geometrically and semantically fine-grained structures. The underlying data model is a complete realization of the CityGML data model for the levels of detail (LOD) 0 to 4. For example, buildings can be represented by simple, monolithic objects or can consist of an aggregation of building parts. Extensions of buildings, like balconies and stairs, can be classified thematically and provided with attributes just as single surfaces can be. LOD4 completes a LOD3 model by adding interior structures for 3D objects. For example, LOD4 buildings are composed of rooms, interior doors, stairs, and furniture. Among other things, this allows selecting the floor space of a building, which can later be used e.g. to derive SmartBuildings or to form 3D solids by extrusion [DBBF2005]. Buildings can be assigned addresses that are also stored in the 3D city database. Their representation follows the OASIS xAL standard, which maps the address formats of different countries into a unified XML schema. In order to model whole complexes of buildings, single buildings can be aggregated to form special building groups. The same complex modelling applies to the other CityGML feature types like bridges, tunnels, transportation and vegetation objects, and water bodies.
Complex digital terrain models (DTM)
DTMs may be represented in four different ways in CityGML and therefore also in the 3D city database: regular grids, triangular irregular networks (TINs), 3D mass points, and 3D break lines. For every level of detail, a complex DTM consisting of any number of DTM components and DTM types can be defined. In addition, it is possible to combine certain kinds of DTM representations for the same geographic area with each other (e.g. mass points and break lines, or grids and break lines). Please note that the Importer/Exporter tool provides functions to read and write TIN, mass point, and break line DTM components, but not raster-based DTMs. GeoRaster data would have to be imported and exported using other tools, e.g. from PostgreSQL, Oracle, ESRI, or Safe Software.
Support for five different Levels of Detail (LODs)
Every geo-object as well as the DTM can be represented in five different resolution or fidelity steps (Levels of Detail, LOD). With increasing LOD, objects not only obtain a more precise and finer geometry, but also gain a thematic refinement.
Support for appearance data
Different appearance data may be stored for each city object. Appearance relates to any surface-based theme, e.g. infrared radiation or noise pollution, not just visual properties. Consequently, data provided by appearances can be used as input for both presentation and analysis of virtual 3D city models. The database supports feature appearances for an arbitrary number of themes per city model. Each LOD of a feature can have individual appearances. Appearances can represent – among others – textures and georeferenced textures. All texture images can be stored in the database. (cf. [GKSS2005])
Representation of generic and prototypical 3D objects

Generic objects enable the storage of 3D geo-objects that are not yet explicitly modelled in CityGML, for example dams or city walls, or that are available in a proprietary file format only. This way, files from other software systems like architecture or computer graphics programs can be imported directly into the database (without interpretation). However, application systems that want to use these data must be able to interpret the corresponding file formats after retrieving them from the 3D City Database.

Prototypical objects are used for memory-efficient management of objects that occur frequently in the city model and that do not differ with respect to geometry and appearance. Examples are elements of street furniture like lanterns, road signs or benches as well as vegetation objects like shrubs, certain tree types etc. Every instance of a prototypical object is represented by a reference to the prototype, a base point and a transformation matrix for scaling, rotating and translating the prototype.

The geometries (and appearances like textures, colors etc.) of generic objects as well as prototypes can be stored either using the geometry datatype of the spatial database management system (PostgreSQL/PostGIS or Oracle Spatial/Locator) or in proprietary file formats. In the latter case, a single file may be saved for every object, but the file type (MIME type), the coordinate transformation matrix needed to integrate the object into the world coordinate reference system (CRS), and the target CRS have to be specified.

Extendable object attribution
All objects in the 3D City Database can be augmented with an arbitrary number of additional generic attributes. This way, it is possible to add further thematic information as well as further spatial properties to the objects at any time. In combination with the concept of generic 3D objects this provides a highly flexible storage option for object types which are not explicitly defined in the CityGML standard. Every generic attribute consists of a triple of attribute name, data type, and value. Supported data types are: string; integer and floating-point numbers; date; time; binary object (BLOB, e.g. for storing a file); geometry object according to the specific geometry data type of PostGIS and Oracle respectively; simple, composite, or aggregate 3D solids or surfaces. Please note that generic attributes of type BLOB or geometry are not allowed in CityGML (and will, thus, not be exported by the CityGML exporter). However, it may be useful to store binary data associated with individual city objects, for example derived 3D computer graphics representations. A small query example is given after this list.
Free, also recursive grouping of geo-objects
Geo-objects can be grouped arbitrarily. The aggregates can be named and may also be provided with an arbitrary number of generic attributes (see above). Object groups may also contain object groups, which leads to nested aggregations of arbitrary depth. In addition, for every object of an aggregation, its role in the group can be specified explicitly (qualified association).
External references for all geo-objects
All geo-objects can be provided with an arbitrary number of references to corresponding objects in external data sources (i.e. hyperlinks / linked data). For example, in case of building objects this allows storing the IDs of the corresponding objects in official cadastres, digital landscape models (DLM), or Building Information Models (BIM). Each reference consists of a URI to the external data store or database and the corresponding object ID or URI within that external data store or database.
Flexible 3D geometries
The geometry of most 3D objects can be represented through the combination of solids and surfaces as well as any – also recursive – aggregation of these elements. Each surface may have different textures and colors attached to both its front and back face. It may also comprise information on transparency. Additional geometry types (any geometry type supported by the spatial database PostgreSQL/PostGIS or Oracle Spatial/Locator) can be added to the geo-objects by using generic attributes.
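
To give an idea of how generic attributes look in practice, the following SQL sketch lists the generic attributes attached to a single city object. It assumes a PostgreSQL/PostGIS instance and the CITYOBJECT and CITYOBJECT_GENERICATTRIB tables of the relational schema discussed in the next chapter; the column names and the GML ID are shown for illustration only and may differ in detail.

    -- List name, type, and value of all generic attributes of one city object
    -- (illustrative sketch; the typed value columns are coalesced into one text value).
    SELECT ga.attrname,
           ga.datatype,
           COALESCE(ga.strval,
                    CAST(ga.intval  AS text),
                    CAST(ga.realval AS text),
                    CAST(ga.dateval AS text),
                    ga.urival) AS value
    FROM   cityobject co
    JOIN   cityobject_genericattrib ga ON ga.cityobject_id = co.id
    WHERE  co.gmlid = 'BLDG_0003000b0013f2b1';  -- example GML ID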

Development history

Version 1 (2003 - 2007)

The development of the 3D City Database has always been closely related to the development of the CityGML standard [KoGr2003]. It was started back in 2003 by Dr. Kolbe and Prof. Plümer at the Institute for Cartography and Geoinformation at the University of Bonn. In the period from November 2003 to December 2005, the official virtual 3D city model of Berlin, commissioned by the Berlin Senate and Berlin Partner GmbH, was developed within a pilot project funded by the European Union [PGKS2005]. Since then, the model has been playing a central role in the three-dimensional spatial data infrastructure of Berlin and has opened up a multitude of applications for the public and private sector alike. For example, the virtual city model is successfully used to present the business location and its urban development, combined with application-related information, to politicians, investors, and the public in order to support civic participation, provide access to decision-making content, assist in policy formulation, and control implementation processes [DKLS2006].

The 3DCityDB was key in demonstrating the real-world usage of CityGML to the Open Geospatial Consortium on the one hand, and the practical usability and versatility of CityGML to the city of Berlin on the other hand. This first development phase was carried out by the University of Bonn in collaboration with the company lat/lon GmbH. Oracle Spatial was the only supported SDBMS in 3DCityDB versions 0.2 up to 1.3.

Version 2 (2006 - 2014)

Within the framework of the Europäischer Fonds für regionale Entwicklung (EFRE II), the project Geodatenmanagement in der Berliner Verwaltung – Amtliches 3D-Stadtmodell für Berlin allowed for upgrading the official 3D city model based on the former CityGML specification draft 0.4.0 in the year 2007. The developments were carried out by the Institute for Geodesy and Geoinformation Science (IGG) of the Berlin University of Technology (where Kolbe became full professor for Geoinformation Science in 2006) on behalf of the Berliner Senatsverwaltung für Wirtschaft, Arbeit und Frauen and the Berlin Partner GmbH (former Wirtschaftsförderung Berlin International). The relational database model (3DCityDB versions 1.4 up to 1.8) was implemented and evaluated in cooperation with 3DGeo GmbH (later bought by Autodesk GmbH) in Potsdam. A special database interface for LandXPlorer was provided by 3DGeo / Autodesk. Later on, a first version of the Java-based CityGML Importer/Exporter was developed [SNKK2009].

In August 2008, CityGML 1.0.0 became an adopted standard of the Open Geospatial Consortium (OGC). In the follow-up project Digitaler Gestaltplan Potsdam starting in 2010, the 3DCityDB version 2 (cf. [KKNS2009] and [NaSt2008]) was developed, which brought support for all CityGML 1.0.0 feature types. The KML/COLLADA exporter was added as well as a ‘Matching’ plugin. This project was carried out by IGG of TU Berlin on behalf of and in collaboration with the company Virtual City Systems (VCS) in Berlin. In 2012, the developer team at TU Berlin received the Oracle Spatial Excellence Award for Education and Research from Oracle USA for its work on the 3DCityDB. Also in 2012, the 3DCityDB was ported to PostgreSQL/PostGIS by Felix Kunde, a master's student from the University of Potsdam, who did his master's thesis in collaboration with IGG [Kund2013].

In August 2012, CityGML 2.0.0 became an adopted standard of the Open Geospatial Consortium (OGC). In September 2012, Prof. Kolbe moved from IGG, TU Berlin to the Chair of Geoinformatics at Technische Universität München (TUM). The companies Virtual City Systems in Berlin and M.O.S.S. Computer Grafik Systeme GmbH in Taufkirchen (near Munich) have also been using the 3D City Database in their commercial projects for a number of years. In this context, the Chair of Geoinformatics at TUM and the companies Virtual City Systems and M.O.S.S. signed an official collaboration agreement on the joint further development of the 3DCityDB and its tools.

Version 3 (2013 - 2018)

The work on the new major release version 3.0.0, bringing support for CityGML 2.0, began in 2013 when Dr. Nagel finished his PhD and joined the company VCS. In version 3.3.0, the new 3D web client was added. The web client was developed by Zhihang Yao with contributions from Kanishk Chaturvedi and Son Nguyen. In 2015, Zhihang Yao and Kanishk Chaturvedi were awarded the first prize in the ‘Best Student Contribution’ category of the ‘Web3D City Modeling Competition’ at the annual ACM SIGGRAPH Web3D Conference for the 3DCityDB-Web-Map-Client.

Version 4 (since 2015)

The work on version 4 – especially the support of CityGML ADEs – began in 2015 in the course of the PhD work of Zhihang Yao. One part of his PhD thesis focuses on the model transformation of CityGML ADEs onto spatial relational databases using pattern matching and graph transformation rules. Support of CityGML ADEs in the Importer/Exporter required a substantial rewriting of the citygml4j Java library, the Importer/Exporter, and the WFS source code, performed by Dr. Nagel starting from 2016. Felix Kunde worked, among other things, on performance improvements and restructuring of the PL/(pg)SQL scripts. Son Nguyen added support for mobile devices in the 3DCityDB-Web-Map-Client in 2017. Docker support was added by Bruno Willenborg in 2018. Starting from 2017, all partners worked on updating diverse functionalities, scripts, and documentation, and on testing.

Version 5 (under development)

The next major version 5 of the 3DCityDB is intended to bring support for CityGML 3.0. CityGML 3.0 itself is a major update of the CityGML standard with many new features and capabilities, and is currently in the process of adoption as an OGC standard.

Support for CityGML 3.0 will require substantial rework of the 3DCityDB database schema, scripts, and all tools. We are planning to kick off the development work in Q2 2021. Stay tuned on our GitHub page at https://github.com/3dcitydb for early results and prototypes. We are looking for feedback, discussions, and contributions from the 3DCityDB community.

Acknowledgements

The 3D City Database project team is grateful for the financial assistance and support we received from partners that contributed to the development of the 3D City Database.

Government Technology Agency of Singapore

The Government Technology Agency of Singapore (GovTech Singapore) has been developing a 3D city standard for Singapore based on CityGML, to establish a common 3D representation of the city-state. GovTech wanted to extend the representation to include other city features through the ADE approach, and had worked with virtualcitySYSTEMS GmbH to start the development of the ADE support in the 3DCityDB. The intent is to open-source the 3DCityDB ADE support to the international community, so as to encourage wider adoption and implementation of the CityGML standard and ADEs.

CADFEM International GmbH

Founded in 1985, CADFEM is one of the pioneers of numerical simulation based on the Finite Element Method and one of the largest European suppliers of Computer-Aided Engineering. Through the Leonard Obermeyer Center of the Technical University Munich, CADFEM supports the research on digital methods for the design, creation and maintenance of the built environment and the work on the 3D City Database. Bridging the gap between simulation systems and 3D GIS / BIM is a key requirement for enabling multi-physics Urban Simulations and for building Digital Twins of the urban space. The CityGML ADE mechanism supports this in two ways: 1) city features can be enriched with data that is relevant for simulations, and 2) simulation results can be brought back into the city model, turning it into a dynamic knowledge base. CADFEM is supporting the 3D City Database project to leverage the adoption and usage of CityGML ADEs in the field of Urban Simulations.

Climate-KIC of the EIT

Climate-KIC is a so-called ‘Knowledge and Innovation Community’ about Climate Change and Mitigation. It is one of three Knowledge and Innovation Communities (KICs) created in 2010 by the European Institute of Innovation and Technology (EIT). The EIT is an EU body whose mission is to create sustainable growth. Most 3DCityDB developments at TU Munich were done in the context of the projects Energy Atlas Berlin, Modeling City Systems (MCS), Smart Sustainable Districts (SSD), and Smart District Data Infrastructure (SDDI), all financially supported by Climate-KIC.

License information

The 3D City Database and all bundled tools are free and open source software under the Apache License, Version 2.0. See the LICENSE.txt file shipped with the software for more details. For a copy of the Apache License, Version 2.0, please visit http://www.apache.org/licenses/.

You can use the 3D City Database in both personal and commercial products in accordance with the license without fees or usage restrictions.

Attention

You must observe the license and use terms of the spatial database systems you are running the 3D City Database on (PostgreSQL/PostGIS or Oracle).

The source code of the 3D City Database is hosted on GitHub at https://github.com/3dcitydb. Contributions are very welcome!

3D City Database

Note

This is the documentation of the 3D City Database version 4.2.

This chapter gives an in-depth presentation and explanation of the relational schema of the 3D City Database. In Section 3.1, it first discusses the database design along the UML data model of CityGML and its mapping and adaptation to a platform-independent conceptual model for the 3D City Database. This database design is also realized as UML diagrams and forms the basis for the derivation of the relational database schema for a specific database system. The resulting relational schema is then discussed in Section 3.2 together with the rules and conventions applied in the mapping process. The relational schema itself is illustrated using entity-relationship diagrams.

The remaining sections of this chapter are dedicated to different aspects of the database-side implementation and work with the 3D City Database.

UML database design

CityGML is a common information model for 3D urban objects and provides a comprehensive and extensible representation of the objects. It is explained in detail in the CityGML specification [GKNH2012], [GKCN2008] and [Kolb2009]. Most thematic classes are (transitively) derived from the basic classes Feature and FeatureCollection, the basic notions defined in ISO 19109 and GML3 for the representation of features and their aggregations. Features contain spatial as well as non-spatial attributes, which are mapped to GML3 feature properties with corresponding data types. Geometric properties are represented as associations to the geometry classes described in Section 3.1.3.1. The thematic model also comprises different types of interrelationships between Feature classes like aggregations, generalizations, and associations.

The aim of the explicit modelling is to reach a high degree of semantic interoperability between different applications. By specifying the thematic concepts and their semantics along with their mapping to UML and GML3, different applications can rely on a well-defined set of Feature types, attributes, and data types with a standardised meaning or interpretation. In order to allow also for the exchange of objects and/or attributes that are not explicitly modelled in CityGML, the concepts of GenericCityObjects and GenericAttributes have been introduced.

This chapter discusses the mapping of the CityGML data model to a general database design for the 3D City Database on a conceptual level using UML diagrams. The following pages cite several parts of the CityGML specification which are necessary for a better understanding. The main focus is on explaining the simplifications and customizations of the CityGML data model and the resulting differences to the 3D City Database design. Design decisions in the model are explicitly visualised within the UML diagrams.

Note

For intuitive understanding, classes that are merged to a single table in the relational schema are shown as orange blocks in the UML diagrams. n:m relations between UML classes that require an additional table are represented as green blocks.

CityGML 2.0 simplifications

The analysis of previous versions of the 3D City Database and of existing real-world CityGML models has shown that a less complex database schema is sufficient to store and manage CityGML data without loss of information. Using a simplified schema also makes it easier to retrieve data or to build software and tools against the schema. Moreover, it helps to improve performance because, for instance, fewer joins are needed in database queries. Therefore, the first task in deriving a database design from the CityGML data model was to identify and implement possible simplifications.

General simplifications that have been applied to multiple and different feature types and elements of CityGML are listed below. A specific discussion of each CityGML module is provided in the following sections.

Multiplicities of attributes
Attributes with a variable number of occurrences (*) are substituted by a data type enabling the storage of arbitrary values (e.g. data type String with a predefined separator) or by an array with a predefined number of elements representing the number of objects that participate in the association. This means that such object attributes can be stored in a single column.
Cardinalities and types of relationships
n:m relations require an additional table in the database. This table consists of the primary keys of both element tables, which form a composite primary key. If the relation can be restricted to a 1:n or n:1 relationship, the additional table can be avoided. Therefore, all n:m relations in CityGML were checked for a more restrictive definition. This results in simplified cardinalities and relations.
Simplified treatment of recursions
Some recursive relations are used in the CityGML data model. Recursive database queries can be costly, especially if the number of recursion steps is unknown. In order to guarantee good performance, tables implementing recursive associations receive two additional columns which contain the IDs of the parent and of the root element. For example, if all building parts related to a specific building are queried, only those tuples containing the ID of the building as root element have to be selected (see the example query after this list). Thus, typical queries, e.g. on object geometries, remain high-performance.
Data type adaptation
Data types specified in CityGML were substituted by data types which allow an efficient representation in the database. Strings, for example, are used to represent code types and number vectors; GML geometry types were changed to the database geometry data type. Each matrix is stored as a single String, with its values listed in row-major order and separated by spaces.
Project specific classes and class attributes
The 3D city database may contain some classes for the representation of project-specific metadata and version control as well as attributes for the representation of additional project-specific information. Since this information is represented differently or not at all in the CityGML specification, appropriate classes and class attributes have been added or adapted accordingly.
Simplified design of GML geometry classes
Spatial properties of features are represented in CityGML using GML3’s geometry model, which is based on the ISO 19107 standard Spatial Schema [Herr2001] and represents 3D geometry according to the well-known Boundary Representation (B-Rep, cf. [FVFH1995]). Actually only a subset of the GML3 geometry package is used in CityGML. Moreover, for 2D and 3D surface-based geometry types a simpler but equally powerful model is used: These geometries are stored as polygons, which are aggregated to MultiSurfaces, CompositeSurfaces, TriangulatedSurfaces, Solids, MultiSolids, as well as CompositeSolids (see Section 3.1.3.1 for more details).
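
As announced above for the treatment of recursions, the following SQL sketch shows how the additional root ID column avoids recursive queries. It assumes a PostgreSQL instance and the BUILDING and CITYOBJECT tables of the relational schema described in Section 3.2; the column names and the GML ID are shown for illustration only.

    -- Fetch a building and all of its (transitive) building parts without
    -- recursion by filtering on the ID of the root element
    -- (illustrative sketch; the GML ID is an example value).
    SELECT b.id, b.building_parent_id, b.building_root_id
    FROM   building b
    WHERE  b.building_root_id = (
             SELECT co.id
             FROM   cityobject co
             WHERE  co.gmlid = 'BLDG_0815'  -- example GML ID of the root building
           );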

Core model

The base class of all thematic classes within CityGML’s data model is the abstract class _CityObject. _CityObject provides a creation and a termination date for the management of histories of features as well as generic attributes and external references to corresponding objects in other data sets. _CityObject is a subclass of the GML class Feature, thus it may inherit multiple names from Feature, which may be optionally qualified by a codeSpace. This enables the differentiation between, for example, an official name from a popular name or names in different languages. The generalisation property generalizesTo of _CityObject may be used to relate features, which represent the same real-world object in different LoD, i.e. a feature and its generalized counterpart(s). The direction of this relation is from the feature to the corresponding generalised feature.

Features of _CityObject and its specialized subclasses may be aggregated to a CityModel, which is a feature collection with optional metadata. Generally, each feature has the attributes class, function, and usage, unless it is stated otherwise. The class attribute can occur only once, while the attributes usage and function can be used multiple times. The class attribute describes the classification of the objects, e.g. road, track, railway, or square. The attribute function contains the purpose of the object, like national highway or county road, while the attribute usage defines whether an object is e.g. navigable or usable for pedestrians. The attributes class, function and usage are specified as gml:CodeType. The values of these properties can be enumerated in code lists. Furthermore, for each feature the geographical extent can be defined using the Envelope element. Minimum and maximum coordinate values have to be assigned to opposite corners of the feature’s bounding box.
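
As an example of how the stored envelope can be used, the following SQL sketch selects all city objects whose bounding box overlaps a rectangular query window. It assumes a PostgreSQL/PostGIS instance and the CITYOBJECT table of the relational schema described in Section 3.2; the coordinate values and the SRID are placeholders and must match the coordinate reference system of the database instance.

    -- Bounding box filter on the stored feature envelopes
    -- (illustrative sketch; coordinates and SRID are placeholder values).
    SELECT co.id, co.gmlid, co.objectclass_id
    FROM   cityobject co
    WHERE  co.envelope && ST_MakeEnvelope(389000, 5819000, 390000, 5820000, 25833);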

_images/citydb_core_model_and_toplevel_classes.png

Core Model and thematic top level classes

The subclasses of _CityObject comprise the different thematic fields of a city model, in the following covered by separate thematic models: building model (_AbstractBuilding), tunnel model (_AbstractTunnel), bridge model (_AbstractBridge), city furniture model (CityFurniture), digital terrain model (ReliefFeature), land use model (LandUse), transportation model (TransportationObject), vegetation model (_VegetationObject), water bodies model (_WaterObject), and generic city object model (GenericCityObject). The latter allows for the modelling of features which are not explicitly covered by one of the other models. The separation into these models strongly correlates with CityGML’s extension modules, each defining a respective part of a virtual 3D city model.

3D objects are often derived from or have relations to objects in other databases or data sets. For example, a 3D building model may have been constructed from a two-dimensional footprint in a cadastre data set. The reference of a 3D object to its corresponding object in an external data set is essential, if an update must be propagated or if additional data is required (like the name and address of a building’s owner in a cadastral information system). In order to supply such information, each _CityObject may have External References to corresponding objects in external data sets. Such a reference denotes the external information system and the unique identifier of the object in this system.

CityObjectGroups aggregate CityObjects and furthermore are defined as special CityObjects. This implies that a group may become a member of another group realizing a recursive aggregation schema. Since CityObjectGroup is a feature, it has the optional attributes class, function and usage. The class attribute allows a group classification with respect to the stated function and may occur only once. The function attribute is intended to express the main purpose of a group, possibly to which thematic area it belongs (e.g. site, building, transportation, architecture, unknown etc.). The attribute usage can be used, if the object’s usage differs from its function. The attributes class, function and usage are specified as gml:CodeType. The values of these properties can be enumerated in code lists.

Each member of a group may be qualified by a role name, reflecting the role each CityObject plays in the context of the group. Furthermore, a CityObjectGroup can optionally be assigned an arbitrary geometry object. This may be used to represent a generalised geometry generated from the member’s geometries. The parent association linking a CityObjectGroup to a CityObject allows for the modelling of generic hierarchical groupings. This concept is used, for example, to represent storeys in buildings. See Fig. 3.1 for the simplified UML diagram.

Geometry model

Spatial properties of features are represented in CityGML using GML3’s geometry model, which is based on the ISO 19107 standard Spatial Schema [Herr2001] and represents 3D geometry according to the well-known Boundary Representation (B-Rep, cf. [FVFH1995]). Actually, only a subset of the GML3 geometry package is used in CityGML. In addition, GML3’s explicit Boundary Representation is extended by an implicit geometry representation, which allows defining and reusing template geometries.

Geometric-topological model

The geometry model of CityGML consists of primitives, which may be combined to form complexes, composite geometries, or aggregates. A zero-dimensional object is modelled as a Point, a one-dimensional object as a _Curve. A curve is restricted to be a straight line, thus only the GML3 class LineString is used.

Combined geometries can be aggregates, complexes or composites of primitives (see illustration in Fig. 3.2). In an Aggregate, the spatial relationship between components is not restricted. They may be disjoint, overlapping, touching, or disconnected. GML3 provides a special aggregate for each dimension, a MultiPoint, a MultiCurve, a MultiSurface or a MultiSolid. In contrast to aggregates, a Complex is topologically structured: its parts must be disjoint, must not overlap and are allowed to touch, at most, at their boundaries or share parts of their boundaries. A Composite is a special complex provided by GML3. It can only contain elements of the same dimension. Its elements must be disjoint as well, but they must be topologically connected along their boundaries. A Composite can be a CompositeSolid, a CompositeSurface, or CompositeCurve and must be homeomorphic to the corresponding primitive geometry.

_images/citydb_aggregated_geometry_types.png

Different types of aggregated geometries [GKNH2012]

The modelling of two-dimensional and three-dimensional geometry types in the 3DCityDB is handled in a simplified way. All surface-based geometries are stored as polygons, which are aggregated to MultiSurfaces, CompositeSurfaces, TriangulatedSurfaces, Solids, MultiSolids, as well as CompositeSolids accordingly. This simplification substitutes the more complex representations used for the GML geometry classes in the grey block of Fig. 3.3 with the elements in the orange block. This way, mapping surface-based geometries to the relational schema can be done with a single table (called SURFACE_GEOMETRY, see Section 3.2.3 for a detailed description).
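
As a consequence, retrieving all polygons that make up one aggregate geometry only requires a selection on this single table. The following SQL sketch assumes a PostgreSQL instance and the SURFACE_GEOMETRY table described in Section 3.2.3; the root ID value is a placeholder.

    -- Retrieve all polygons of one aggregate geometry (e.g. a solid) by
    -- selecting on the ROOT_ID of its geometry hierarchy (illustrative sketch).
    SELECT sg.id, sg.gmlid, sg.geometry
    FROM   surface_geometry sg
    WHERE  sg.root_id = 42            -- ID of the root geometry of the solid
      AND  sg.geometry IS NOT NULL;   -- aggregate rows themselves carry no polygon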

_images/citydb_geometrical-topographical_model.png

Geometric-topological model. For simplification the geometry classes in the grey block are substituted by the construct in the orange block

Another reason for the explicit surface-based storage is that each surface can be assigned multiple appearances (e.g., textures) in CityGML and, thus, each appearance must be explicitly linkable to the corresponding polygons in the database (see also Section 3.1.4).

Implicit geometry

The concept of implicit geometries is an enhancement of the GML3 geometry model. An implicit geometry is a geometric object, where the shape is stored only once as a prototypical geometry. Examples are trees or other vegetation objects, traffic lights or traffic signs. This template geometry object is re-used or referenced many times, wherever the corresponding feature occurs in the 3D city model. Each occurrence is represented by a link to the prototypic shape geometry (in a local Cartesian coordinate system), by a transformation matrix that is multiplied with each 3D coordinate of the prototype, and by an anchor point denoting the base point of the object in the world coordinate reference system. The concept of implicit geometries is similar to the well-known concept of primitive instancing used for the representation of scene graphs in the field of computer graphics [FVFH1995].
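
Expressed as a formula (a sketch of the concept only, assuming the usual homogeneous 4x4 transformation matrix T and the anchor point a), the world position of each prototype vertex (x, y, z) is obtained by applying T to the homogeneous local coordinate and adding the anchor point:

    p_{\mathrm{world}} = \left[\, T \cdot (x,\, y,\, z,\, 1)^{\top} \right]_{1..3} + (a_x,\, a_y,\, a_z)^{\top}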

_images/citydb_implicit_geometry_model.png

Implicit geometry model

Implicit geometries may be applied to features from different thematic fields in order to geometrically represent the features within a specific level of detail (LOD). Thus, each CityGML thematic extension module (like Building, Bridge, and Tunnel etc.) may define spatial properties providing implicit geometries for its thematic classes.

The shape of an implicit geometry can be represented in an external file with a proprietary format, e.g. a VRML file, a DXF file, or a 3D Studio MAX file. The reference to the implicit geometry can be specified by a URI pointing to a local or remote file, or even to an appropriate web service. Alternatively, a GML3 geometry object can define the shape. This has the advantage that it can be stored or exchanged inline within the CityGML dataset. Typically, the shape of the geometry is defined in a local coordinate system where the origin lies within or near the object’s extent. If the shape is referenced by a URI, the MIME type of the denoted object has to be specified as well (e.g. “model/vrml” for VRML models or “model/x3d+xml” for X3D models).

The implicit representation of 3D object geometry has some advantages compared to the explicit modelling, which represents the objects using absolute world coordinates. It is more space-efficient, and thus more extensive scenes can be stored or handled by a system. The visualization is accelerated since 3D graphics hardware supports the scene graph concept. Furthermore, the usage of different shape versions of objects is facilitated, e.g. different seasons, since only the library objects have to be exchanged.

Appearance model

Information about a surface’s appearance, i.e. observable properties of the surface, is considered an integral part of virtual 3D city models in addition to semantics and geometry. Appearance relates to any surface-based theme, e.g. infrared radiation or noise pollution, not just visual properties and can be represented by – among others – textures and georeferenced textures. Appearances are supported for an arbitrary number of themes per city model. Each LoD of a feature can have individual appearances. Each city object or city model respectively may store its own appearance data. Therefore, the base CityGML classes _CityObject and CityModel contain a relation appearance and appearanceMember respectively.

_images/citydb_appearance_model.png

Appearance model

Themes are represented by an identifier only. The appearance of a city model for a given theme is defined by a set of objects of class Appearance, referencing this theme through the attribute theme. All appearance objects belonging to the same theme compose a virtual group. An Appearance object collects surface data relevant for a specific theme through the relation surfaceDataMember. Surface data is represented by objects of the abstract class _SurfaceData. Its only attribute is the Boolean flag isFront, which determines the side (front or back face of the surface) a surface data object applies to.

A constant surface property is modelled as material. A surface property, which depends on the location within the surface, is modelled as texture. Each surface object can have both a material and a texture per theme and side. This allows for providing both a constant approximation and a complex measurement of a surface’s property simultaneously. If a surface object is to receive multiple textures or materials, each texture or material requires a separate theme. The mixing of themes or their usage is not explicitly defined but left to the application.
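
To make the link between themes, surface data, and target surfaces concrete, the following SQL sketch lists the surface data objects of one theme together with the surface geometries they are mapped to. It assumes a PostgreSQL instance and the APPEARANCE, APPEAR_TO_SURFACE_DATA, SURFACE_DATA, and TEXTUREPARAM tables of the relational schema discussed in Section 3.2; table and column names are shown for illustration, and the theme name is an example value.

    -- Surface data of one theme and the surface geometries it is mapped to
    -- (illustrative sketch).
    SELECT a.theme,
           sd.id AS surface_data_id,
           sd.is_front,
           tp.surface_geometry_id
    FROM   appearance a
    JOIN   appear_to_surface_data a2sd ON a2sd.appearance_id = a.id
    JOIN   surface_data sd ON sd.id = a2sd.surface_data_id
    JOIN   textureparam tp ON tp.surface_data_id = sd.id
    WHERE  a.theme = 'summer';  -- example theme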

Materials define light reflection properties that are constant for a whole surface object. The definition of the class X3DMaterial is adopted from the X3D and COLLADA specifications:

  • diffuseColor defines the colour of diffusely reflected light.
  • specularColor defines the colour of a directed reflection.
  • emissiveColor is the colour of light generated by the surface.

All colours use RGB values with red, green, and blue channels, each defined as a value between 0 and 1. Transparency is stored separately using the transparency element, where 0 stands for fully opaque and 1 for fully transparent. ambientIntensity specifies the minimum percentage of diffuseColor that is visible regardless of light sources. shininess controls the sharpness of the specular highlight; 0 produces a soft glow while 1 results in a sharp highlight. isSmooth gives a hint for normal interpolation. If this Boolean flag is set to true, vertex normals should be used for shading (Gouraud shading). Otherwise, normals should be constant for a surface patch (flat shading). Target surfaces are specified using target elements. Each element contains the URI of one target surface geometry object.

The base class for textures is _AbstractTexture. Here, textures are always raster-based 2D textures. The raster image is specified by imageURI using a URI and may contain an arbitrary image data resource, even a preformatted request for a web service. The image data format can be defined using standard MIME types in the mimeType element. Textures can be qualified by the attribute textureType, differentiating between textures, which are specific for a certain object (specific) and prototypic textures being typical for that object surface (typical). Textures may also be classified as unknown. The specification of texture wrapping is adopted from the COLLADA standard. Possible values of the attribute wrapMode are none, wrap, mirror, clamp and border.

_AbstractTexture is further specialised according to the texture parameterisation, i.e. the mapping function from a location on the surface to a location in the texture image. Texture parameterisation uses the notion of texture space, where the texture image always occupies the region [0,1]² regardless of the actual image size or aspect ratio. The lower left image corner is located at the origin. To apply textures, the mapping function must be known for each surface object.

The class GeoreferencedTexture describes a texture that uses a planimetric projection. Such a texture has a unique mapping function which is usually provided with the image file (e.g. georeferenced TIFF) or as a separate ESRI world file. The search order for an external georeference is determined by the Boolean flag preferWorldFile. Alternatively, inline specification of a georeference similar to a world file is possible. This internal georeference specification always takes precedence over any external georeference. referencePoint defines the location of the centre of the upper left image pixel in world space and corresponds to values 5 and 6 in an ESRI world file. Since GeoreferencedTexture uses a planimetric projection, referencePoint is two-dimensional, and the orientation defines the rotation and scaling of the image in the form of a 2x2 matrix (a list of 4 doubles in row-major order corresponding to values 1, 3, 2, and 4 in an ESRI world file). The CRS of this transformation is identical to the referencePoint’s CRS. If neither an internal nor an external georeference is given, the GeoreferencedTexture is invalid. Target surfaces are specified using target elements. Each element contains the URI of one target surface geometry object. All target surface objects share the mapping function defined by the georeference.
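
Put as a formula (illustrative only), the mapping from an image pixel position (col, row) to world coordinates then follows the usual world file convention, with the orientation values denoted A, B, D, E (world file lines 1, 3, 2, 4) and the referencePoint denoted (C, F) (lines 5, 6):

    x_{\mathrm{world}} = A \cdot \mathit{col} + B \cdot \mathit{row} + C
    y_{\mathrm{world}} = D \cdot \mathit{col} + E \cdot \mathit{row} + F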

The class ParameterizedTexture describes a texture with a target-dependent mapping function. Each target surface geometry object is specified as URI in the uri attribute of a separate target element. The mapping is defined by associated classes of _TextureParameterization:

  • TexCoordList for the concept of texture coordinates, defining an explicit mapping of a surface’s boundary points to points in texture space, and
  • TexCoordGen when using a common 3x4 transformation matrix from world space to texture space, specified by the attribute worldToTexture.

Building model

Buildings can be represented in five levels of detail (LoD0 to LoD4). The building model allows the representation of simple buildings that consist of only one component, as well as the representation of complex relations between parts of a building, e.g. a building consisting of three parts – a main house, a garage and an extension. The parts can again consist of parts etc. The subclasses Building and BuildingPart of _AbstractBuilding enable these modelling options.

_images/citydb_example_building_parts.png

Example of buildings consisting of one and two building parts [GKCN2008]

In the case of a simple, one-piece house there is only one Building which inherits all attributes and relations from _AbstractBuilding (cf. Fig. 3.6). However, such a Building can also comprise BuildingParts which likewise inherit all properties from _AbstractBuilding: the building’s class, function (e.g. residential, public, or industry), usage, year of construction, year of demolition, roof type, measured height, and the number and individual heights of all its storeys above and below ground (cf. Fig. 3.7).

_images/citydb_building_model.png

UML diagram of Building model

Furthermore, Addresses can be assigned to Buildings or BuildingParts. In particular, BuildingParts may again comprise BuildingParts as components, because the composition relation is inherited. This way a tree-like hierarchy can be created whose root object is a Building and whose non-root nodes are BuildingParts. The attribute values are generally provided at the leaves of the hierarchy, because basically every part can have its own construction year and function, for instance. However, the function can also be defined at the root of the hierarchy and therefore span the whole building. The individual BuildingParts within a Building must not penetrate each other and must form a coherent object.

The geometric representation of an _AbstractBuilding is successively refined from LOD0 to LOD4. Therefore, a single building can have multiple spatial representations in different levels of detail at the same time using Solid, MultiSurface, and/or MultiCurve (cf. Fig. 3.7) geometries.

In LoD0, the building can be represented by horizontal, 3-dimensional surfaces describing the footprint and the roofprint. In LoD1, a building model consists of a geometric representation of the building volume. Optionally, a MultiCurve representing the TerrainIntersectionCurve can be specified. This geometric representation is refined in LoD2 by additional MultiSurface and MultiCurve geometries, used for modelling architectural details like a roof overhang, columns, or antennas. In LoD2 and higher LoDs, the outer facade of a building can also be differentiated semantically by the classes _BoundarySurface and BuildingInstallation. A _BoundarySurface is a part of the building’s exterior shell with a special function like wall (WallSurface), roof (RoofSurface), ground plate (GroundSurface), or closing surface (ClosureSurface), as shown in Fig. 3.8. Closure surfaces can be used to virtually seal open buildings such as hangars, allowing e.g. volume calculation. The BuildingInstallation class is used for building elements like balconies, chimneys, dormers, or outer stairs, which strongly affect the outer appearance of a building; a BuildingInstallation optionally has the attributes class, function, and usage.

_images/citydb_building_boundary_surface.png

Boundary surfaces

In LoD3, the openings in _BoundarySurface objects (doors and windows) can be represented as thematic objects. In LoD4, the highest level of resolution, also the interior of a building, composed of several rooms, is represented in the building model by the class Room. The aggregation of rooms according to arbitrary, user-defined criteria (e.g. for defining the rooms corresponding to a certain storey) is achieved by employing the general grouping concept provided by CityGML. Interior installations of a building, i.e. objects within a building which (in contrast to furniture) cannot be moved, are represented by the class IntBuildingInstallation. If an installation is attached to a specific room (e.g. radiators or lamps), it is associated with the Room class, otherwise (e.g. in case of rafters or pipes) with _AbstractBuilding. A Room may have the attributes class, function, and usage referencing external code lists. The class attribute allows a classification of rooms with respect to the stated function, e.g. commercial or private rooms, and occurs only once. The function attribute is intended to express the main purpose of the room, e.g. living room or kitchen. The attribute usage can be used if the object’s usage differs from its function. Both attributes can occur multiple times.

The visible surface of a room is represented geometrically as a Solid or MultiSurface. Semantically, the surface can be structured into specialised _BoundarySurfaces, representing floor (FloorSurface), ceiling (CeilingSurface), and interior walls (InteriorWallSurface) (cf. Fig. 3.8). Room furniture, like tables and chairs, can be represented in the CityGML building model with the class BuildingFurniture. A BuildingFurniture may have the attributes class, function, and usage.

Bridge model

The bridge model was developed in analogy to the building model (cf. Section 3.1.5) with regard to structure and attributes [GKCN2008]. The bridge model allows for the representation of the thematic, spatial, and visual aspects of bridges and bridge parts in four levels of detail, LOD 1 – 4. A (movable or unmovable) bridge can consist of multiple BridgeParts. Like Bridge, BridgePart is a subclass of _AbstractBridge and hence has the same attributes and relations. The relation consistOfBridgePart represents the aggregation hierarchy between a Bridge (or a BridgePart) and its BridgeParts. By this means, an aggregation hierarchy of arbitrary depth can be modelled. The semantic attributes of an _AbstractBridge are class, function, usage, and is_movable. The attribute class is used to classify bridges, e.g. to distinguish different construction types (cf. Fig. 3.9). The attribute function allows representing the utilization of the bridge independently of the construction. Possible values may be railway bridge, roadway bridge, pedestrian bridge, aqueduct, etc. The option to denote a usage which diverges from the primary functions of the bridge (function) is given by the attribute usage. Each Bridge or BridgePart feature may be assigned zero or more addresses using the address property.

_images/citydb_example_bridge_parts.png

Example of bridge consisting of bridge parts

The spatial properties are defined by a solid for each of the four LODs (relations lod1Solid to lod4Solid). In analogy to the building model, the semantic as well as the geometric richness increases from LOD1 (blocks model) to LOD3 (architectural model). Interior structures like rooms are dedicated to LOD4. To cover the case of bridge models whose topology does not satisfy the properties of a solid (essentially water tightness), a multi-surface representation is allowed (lod1MultiSurface to lod4MultiSurface). The line where the bridge touches the terrain surface is represented by a terrain intersection curve, which is provided for each LOD (relations lod1TerrainIntersection to lod4TerrainIntersection). In addition to the solid representation of a bridge, linear characteristics like ropes or antennas can be specified geometrically by the lod1MultiCurve to lod4MultiCurve relations.

The thematic boundary surfaces of a bridge are defined in analogy to the building module. _BoundarySurface is the abstract base class for several thematic classes, structuring the exterior shell of a bridge as well as the visible surfaces of rooms, bridge construction elements and both outer and interior bridge installations. From _BoundarySurface, the thematic classes RoofSurface, WallSurface, GroundSurface, OuterCeilingSurface, OuterFloorSurface, ClosureSurface, FloorSurface, InteriorWallSurface, and CeilingSurface are derived.

_images/citydb_bridge_boundary_surface.png

Different BoundarySurfaces of a bridge

Bridge elements which do not have the size, significance or meaning of a BridgePart can be modelled either as BridgeConstructionElement or as BridgeInstallation. Elements which are essential from a structural point of view are modelled as BridgeConstructionElement, for example structural elements like pylons, anchorages etc. (cf. Fig. 3.9 and Fig. 3.11). A general classification as well as the intended and actual function of the construction element are represented by the attributes class, function, and usage. The visible surfaces of a bridge construction element can be semantically classified using the concept of boundary surfaces representing floor (FloorSurface), ceiling (CeilingSurface), and interior walls (InteriorWallSurface) (cf. Fig. 3.10). Whereas a BridgeConstructionElement has structural relevance, a BridgeInstallation represents an element of the bridge which can be removed without the bridge collapsing (e.g. stairway, antenna, or railing) (cf. Fig. 3.11). BridgeInstallations occur in LOD 2 to 4. The class BridgeInstallation contains the semantic attributes class, function, and usage. The attribute class gives a classification of installations of a bridge. With the attributes function and usage, nominal and real functions of the bridge installation can be described.

_images/citydb_example_bridge_construction_element.png

Example of bridge consisting of BridgeConstructionElement and BridgeInstallation

In LOD3 and LOD4, a _BoundarySurface may contain _Openings like doors and windows. The classes BridgeRoom, IntBridgeInstallation and BridgeFurniture allow for the representation of the bridge interior. They are designed in analogy to the classes Room, IntBuildingInstallation and BuildingFurniture of the building module and share the same meaning. The bridge interior can only be modelled in LOD4.

_images/citydb_bridge_model.png

UML diagram of bridge model

CityFurniture model

City furniture objects are immovable objects like lanterns, traffic lights, traffic signs, flower buckets, advertising columns, benches, delimitation stakes, or bus stops. The class CityFurniture may have the attributes class, function, and usage (cf. UML diagram in Fig. 3.13). Their possible values are explained in detail in the CityGML specification. The class attribute allows an object classification like traffic light, traffic sign, delimitation stake, or garbage can, and can occur only once. The function attribute describes to which thematic area the city furniture object belongs (e.g. transportation, traffic regulation, architecture etc.), and can occur multiple times. The attribute usage denotes the real purpose of the city object, and can occur multiple times as well.

_images/citydb_cityfurniture_model.png

City furniture model

Since CityFurniture is a subclass of CityObject and hence is a feature, it inherits the attribute gml:name. As with any CityObject, CityFurniture objects may be assigned ExternalReferences and GenericAttributes. Via ExternalReferences, city furniture objects can link to external thematic databases. Thereby, semantic information about the objects which cannot be modelled in CityGML can be transmitted and used in the 3D city model for further processing, for example information from power line or pipeline systems, traffic sign cadastres, or water resources for disaster management.

City furniture objects can be represented in city models with their specific geometry, but in most cases the same kind of object has an identical geometry. The geometry of CityFurniture objects in LoD 1-4 may be represented by an explicit geometry (lodXGeometry where X is between 1 and 4) or an ImplicitGeometry object (lodXImplicitRepresentation with X between 1 and 4). In the concept of ImplicitGeometry the geometry of a prototype city furniture object is stored only once in a local coordinate system and referenced by a number of features. Spatial information of city furniture objects can be taken from city maps or from public and private external information systems. In order to specify the exact intersection of the DTM with the 3D geometry of a city furniture object, the latter can have a TerrainIntersectionCurve (TIC) for each LoD. This allows for ensuring a smooth transition between the DTM and the city furniture object.

Generics model

The concept of generic objects and attributes has been introduced in CityGML to facilitate the storage and exchange of 3D objects which are not covered by explicitly modelled classes within CityGML or which require additional attributes. These generic extensions are realised by the class GenericCityObject and the data type genericAttribute (cf. Fig. 3.14).

A GenericCityObject may have the attributes class, function, and usage, which are specified as gml:CodeType. The class attribute allows an object classification within the thematic area such as bridge, tunnel, pipe, power line, dam, or unknown. The function attribute describes to which thematic area the GenericCityObject belongs (e.g. site, transportation, architecture, energy supply, water supply, unknown etc.). The attribute usage can be used if the object’s usage differs from its function. Each _CityObject and all thematic subclasses can have an arbitrary number of genericAttributes. Data types may be String, Integer, Double (floating point number), URI (Uniform Resource Identifier), Date, and gml:MeasureType. The attribute type is defined by the selection of the particular subclass of _genericAttribute (stringAttribute, intAttribute etc.). In addition, generic attributes can be grouped using the genericAttributeSet class, which is derived from _genericAttribute and thus is itself realized as a generic attribute. Its value is the set of contained generic attributes.

_images/citydb_generic_model.png

GenericCityObject model

The geometry of a GenericCityObject can either be an explicit GML3 geometry or an ImplicitGeometry. In the case of an explicit geometry, the object can have only one geometry for each LoD, which may be an arbitrary 3D GML geometry object (class _Geometry, which is the base class of all GML geometries, lodXGeometry, X in 0…4). Absolute coordinates according to the reference system of the city model must be given for the explicit geometry. In the case of an ImplicitGeometry, a reference point (anchor point) of the object and optionally a transformation matrix must be given. In order to compute the actual location of the object, the transformation of the local coordinates into the reference system of the city model must be processed and the anchor point coordinates must be added. The shape of an ImplicitGeometry can be given as an external resource with a proprietary format, e.g. a VRML or DXF file from a local file system or an external web service. Alternatively, the shape can be specified as a 3D GML3 geometry with local Cartesian coordinates using the property relativeGeometry.

In order to specify the exact intersection of the DTM with the 3D geometry of a GenericCityObject, the latter can have TerrainIntersectionCurves for every LoD. This is important for 3D visualization but also for certain applications like driving simulators. For example, if a city wall (e.g., the Great Wall of China) should be represented as a GenericCityObject, a smooth transition between the DTM and the road on the city wall would have to be ensured (in order to avoid unrealistic bumps).

LandUse model

LandUse objects describe areas of the earth’s surface dedicated to a specific land use. They can be employed to represent parcels in 3D. Fig. 3.15 shows the UML diagram of land use objects.

Every LandUse object may have the attributes class (e.g. settlement area, industrial area, farmland etc.), function (purpose, e.g. cornfield), and usage, which can be used if the way the object is actually used differs from the function. Since the attributes usage and function may be used multiple times, storing them in only one string requires a single white space as unique separator.

_images/citydb_landuse_model.png

LandUse model

The LandUse object is defined for all LoD 0-4 and may have different geometries for each LoD. The surface geometry of a LandUse object is required to have 3D coordinate values. It must be a GML3 MultiSurface, which might be assigned appearance properties like material (X3DMaterial) and texture (_AbstractTexture and its subclasses).

Digital terrain model

CityGML includes a very adaptable digital terrain model (DTM) which permits the combination of heterogeneous DTM types (grid, TIN, break lines, mass points) available in different levels of detail.

A DTM fitting to a certain city model is represented by the class ReliefFeature. This is a CityObject having the LoD step that fits the DTM as attribute. A relief consists of several ReliefComponents. Each of these components that are likewise CityObjects also comprises a LoD step. Individual geometrical types of the components are defined by the four subclasses of ReliefComponent: breaklines, triangular networks (TINs), mass points, and grids (raster). Geometrically, the corresponding ISO 19107 or GML classes define these types: breaklines by a single MultiCurve, TINs by TriangulatedSurfaces, mass points by MultiPoint, and raster by RectifiedGridCoverage.

_images/citydb_terrain_model.png

UML diagram representing the digital terrain model

A relief can contain ReliefComponents of heterogeneous type and different LoDs. A relief in LoD2, for example, can contain some LoD3-TIN-ReliefComponents beside a LoD2-Raster-ReliefComponent. In some cases even a LoD1 grid may exist in some regions of the relief.

In order to geometrically separate the individual components of a grid, which can exist in different LoD, the validity polygon of a component (extent) is used. This polygon defines the scope in which the component is valid. A grid with three components is shown in Fig. 3.17. It depicts a coarse raster containing two high-resolution TINs (TIN 1 and 2). The validity polygon of the raster is represented by the blue line, while the validity polygons of the TINs are bordered in green and red. In this case, the validity polygon of the raster (grid) has two holes where the raster (grid) is not valid, although it does exist. Instead, the high-resolution TINs are used for the representation of the terrain in these regions. That means the validity polygons of the TINs exactly fit the two holes in the validity polygon of the raster (grid).

_images/citydb_example_relief_components.png

A relief, consisting of three components and its validity polygons (from: [PGKS2005])

In the simplest and most frequent case, the validity polygon of a grid corresponds exactly with its bounding box, i.e. the spatial extent of the grid.

Transportation model

The transportation model of CityGML is a multi-functional, multi-scale model focusing on thematic and functional as well as geometrical/topological aspects. Transportation features are represented as a linear network in LoD0. Starting from LoD1, all transportation features are geometrically described by 3D surfaces.

The main class is TransportationComplex (cf. Fig. 3.19) which represents, for example, a road, a track, a railway, or a square. It is composed of the parts TrafficArea and AuxiliaryTrafficArea. Fig. 3.18 depicts an example for a LoD2 TransportationComplex configuration within a virtual 3D city model. The Road consists of several TrafficAreas for the sidewalks, road lanes, parking lots, and of AuxiliaryTrafficAreas below the raised flower beds.

_images/citydb_lod2_transportation_complex.png

LoD2 representation of a transportation complex (from: [GKCN2008])

The road itself is represented as a TransportationComplex, which is further subdivided into TrafficAreas and AuxiliaryTrafficAreas. The TrafficAreas are those elements which are important in terms of traffic usage, like car driving lanes, pedestrian zones and cycle lanes. The AuxiliaryTrafficAreas describe further elements of the road, like kerbstones, middle lanes, and green areas.

_images/citydb_transportation_model.png

UML model for transportation complex

TransportationComplex objects can be thematically differentiated using the subclasses Track, Road, Railway, and Square. Every TransportationComplex has the attributes class, function and usage, referencing external code lists. The attribute class describes the classification of the object. The attribute function describes the purpose of the object, for example national motorway, country road, or airport, while the attribute usage can be used if the actual usage differs from the function.

In addition, both TrafficArea and AuxiliaryTrafficArea may have the attributes class, function, usage, and surfaceMaterial. The attribute class describes the classification of the object. For TrafficArea, the attribute function describes whether the object is a car driving lane, a pedestrian zone, or a cycle lane, while the usage attribute indicates which modes of transportation can use it (e.g. pedestrian, car, tram, roller skates). The attribute surfaceMaterial specifies the type of pavement and may also be used for AuxiliaryTrafficAreas (e.g. asphalt, concrete, gravel, soil, rail, grass etc.). The function attribute of the AuxiliaryTrafficArea defines, among others, kerbstones, middle lanes, or green areas. The possible values are specified in external code lists.

TransportationComplex is a subclass of _TransportationObject and of the root class _CityObject. The geometrical representation of the TransportationComplex varies through the different levels of detail. In the coarsest LoD0, transportation complexes are modelled by line objects establishing a linear network. Starting from LoD1, a TransportationComplex provides an explicit surface geometry, reflecting the actual shape of the object, not just its centreline. In LoD2 to LoD4, it is further subdivided thematically into TrafficAreas, which are used for transportation by cars, trains, public transport, airplanes, bicycles, or pedestrians, and into AuxiliaryTrafficAreas, which are of minor importance for transportation purposes, for example road markings, green spaces or flower tubs.

Tunnel model

The tunnel model is closely related to the building model. It supports the representation of thematic and spatial aspects of tunnels and tunnel parts in four levels of detail, LOD1 to LOD4. The UML diagram of the tunnel model is shown in Fig. 3.21. The pivotal class of the model is _AbstractTunnel, which is a subclass of the thematic class _Site (and transitively of the root class _CityObject). _AbstractTunnel is specialized either to a Tunnel or to a TunnelPart. Since an _AbstractTunnel consists of TunnelParts, which again are _AbstractTunnels, an aggregation hierarchy of arbitrary depth may be realized. Both classes Tunnel and TunnelPart inherit the attributes of _AbstractTunnel: the class of the tunnel, the function, the usage, the year of construction and the year of demolition. In contrast to _AbstractBuilding, Address features cannot be assigned to _AbstractTunnel.

_images/citydb_example_tunnel_parts.png

Example of a tunnel modelled with two tunnel parts

The geometric representation and semantic structure of an _AbstractTunnel is shown in Fig. 3.21. The model is successively refined from LOD1 to LOD4. Therefore, not all components of a tunnel model are represented equally in each LOD and not all aggregation levels are allowed in each LOD. An object can be represented simultaneously in different LODs by providing distinct geometries for the corresponding LODs.

_images/citydb_tunnel_model.png

UML diagram of tunnel model

Similar to the building and bridge models (cf. Section 3.1.5 and Section 3.1.6), only the outer shell of a tunnel is represented in LOD1 – 3, which is composed of the tunnel’s boundary surfaces to the surrounding earth, water, or outdoor air. The interior of a tunnel may only be modelled in LOD4.

In LOD1, a tunnel model consists of a geometric representation of the tunnel volume. Optionally, a MultiCurve representing the TerrainIntersectionCurve can be specified. The geometric representation is refined in LOD2 by additional MultiSurface and MultiCurve geometries. In LOD2 and higher LODs the outer structure of a tunnel can also be differentiated semantically by the classes _BoundarySurface and TunnelInstallation. A boundary surface is a part of the tunnel’s exterior shell with a special function like wall (WallSurface), roof (RoofSurface), ground plate (GroundSurface), outer floor (OuterFloorSurface), outer ceiling (OuterCeilingSurface) or ClosureSurface (see Fig. 3.22). The TunnelInstallation class is used for tunnel elements like outer stairs, strongly affecting the outer appearance of a tunnel. A TunnelInstallation may have the attributes class, function and usage.

_images/citydb_tunnel_boundary_surface.png

Different BoundarySurfaces of a tunnel

In LOD3, the openings in _BoundarySurface objects (doors and windows) can be represented as thematic objects. In LOD4, the highest level of resolution, also the interior of a tunnel, composed of several hollow spaces, is represented in the tunnel model by the class HollowSpace. This enlargement allows a virtual accessibility of tunnels, e.g. for driving through a tunnel, for simulating disaster management or for presenting the light illumination within a tunnel. The aggregation of hollow spaces according to arbitrary, user defined criteria (e.g. for defining the hollow spaces corresponding to horizontal or vertical sections) is achieved by employing the general grouping concept provided by CityGML (cf. Section 3.1.2).

Interior installations of a tunnel, i.e. objects within a tunnel which (in contrast to furniture) cannot be moved, are represented by the class IntTunnelInstallation. If an installation is attached to a specific hollow space (e.g. lamps, ventilators), it is associated with the HollowSpace class, otherwise (e.g. pipes) with _AbstractTunnel. A HollowSpace may have the attributes class, function and usage whose possible values can be enumerated in code lists. The class attribute allows a general classification of hollow spaces, e.g. commercial or private rooms, and occurs only once. The function attribute is intended to express the main purpose of the hollow space, e.g. control area, installation space, and storage space. The attribute usage can be used if the way the object is actually used differs from the function. Both attributes can occur multiple times.

The visible surface of a hollow space is represented geometrically as a Solid or MultiSurface. Semantically, the surface can be structured into specialized _BoundarySurfaces, representing floor (FloorSurface), ceiling (CeilingSurface), and interior walls (InteriorWallSurface). Hollow space furniture, like movable equipment in control areas, can be represented in the CityGML tunnel model with the class TunnelFurniture. A TunnelFurniture may have the attributes class, function and usage.

Vegetation model

The vegetation model of CityGML distinguishes between solitary vegetation objects like trees and vegetation areas, which represent biotopes like forests or other plant communities. Single vegetation objects are modelled by the class SolitaryVegetationObject, while for areas filled with specific vegetation the class PlantCover is used.

_images/citydb_example_vegetation_model.png

Image illustrates objects of the vegetation model (from: [GKCN2008])

The geometry representation of a PlantCover feature may be a MultiSurface or a MultiSolid, depending on the vertical extent of the vegetation. For example, regarding forests, a MultiSolid representation might be more appropriate (cf. Fig. 3.23).

The UML diagram of the vegetation model is depicted in Fig. 3.24. A SolitaryVegetationObject may have the attributes class (e.g. tree, bush, grass), species (species’ name, e.g. Abies alba), usage, and function (e.g. botanical monument), height, trunkDiameter and crownDiameter. A PlantCover feature may have the attributes class (plant community), usage, function (e.g. national forest) and averageHeight. Since both SolitaryVegetationObject and PlantCover are CityObjects, they inherit all attributes of a city object, in particular its name (gml:name) and an ExternalReference to a corresponding object in an external information system, which may contain botanical information from public environmental agencies.

_images/citydb_vegetation_model.png

Vegetation Model

The geometry of a SolitaryVegetationObject may be defined in LoD 1-4 by absolute coordinates, or prototypically by an ImplicitGeometry. Season dependent appearances may be mapped using ImplicitGeometries. For visualisation purposes, only the content of the library object defining the object’s shape and appearance has to be swapped.

A SolitaryVegetationObject or a PlantCover may have a different geometry in each LoD. Whereas a SolitaryVegetationObject is associated with the _Geometry class representing an arbitrary GML geometry (by the relation lodXGeometry), a PlantCover is restricted to be either a MultiSolid or a MultiSurface.

WaterBody model

The water bodies model represents the thematic aspects and 3D geometry of rivers, canals, lakes, and basins. In LoD 2-4, water bodies are bounded by distinct thematic surfaces. These surfaces are the obligatory WaterSurface, defined as the boundary between water and air, the optional WaterGroundSurface, defined as the boundary between water and underground (e.g. DTM or floor of a 3D basin object), and zero or more WaterClosureSurfaces, defined as virtual boundaries between different water bodies or between water and the end of a modelled region (cf. Fig. 3.25). The WaterSurface may be modelled as a dynamic element to represent temporarily changing situations such as tidal flats.

_images/citydb_waterbody_definitions.png

Definition of waterbody attributes (from: [GKNH2012])

Each WaterBody object may have the attributes class (e.g. lake, river, or fountain), function (e.g. national waterway or public swimming) and usage (e.g. navigable) referencing to external code lists. Since the attributes usage and function may be used multiple times, storing them in only one string requires a unique delimiter.

WaterBody is a subclass of the root class _CityObject. The geometrical representation of the WaterBody varies for different levels of detail. The WaterBody can be differentiated semantically by the class _WaterBoundarySurface. A _WaterBoundarySurface is a part of the water body’s exterior shell with a special function like WaterSurface, WaterGroundSurface or WaterClosureSurface. As with any _CityObject, WaterBody objects as well as WaterSurface, WaterGroundSurface, and WaterClosureSurface objects may be assigned ExternalReferences and GenericAttributes.

Both LoD0 and LoD1 represent a low level of illustration and a high degree of generalisation. Here, rivers are modelled as MultiCurve geometries and brooks are omitted. Seas, oceans, and lakes with significant extent are represented as MultiSurfaces (cf. Fig. 3.26).

_images/citydb_waterbody_model.png

Waterbody model

Starting from LoD1, water bodies may also be modelled as volumes filled with water, represented by Solids. If a water body is represented by a Solid in LoD2 or higher, the surface geometries of the corresponding thematic WaterClosureSurface, WaterGroundSurface, and WaterSurface objects must coincide with the exterior shell of the Solid. This can be ensured, if for one LoD X the respective lodXSurface elements (where X is between 2 and 4) of WaterClosureSurface, WaterGroundSurface, and WaterSurface reference the corresponding polygons (using XLink) within the CompositeSurface that defines the exterior shell of the Solid. Furthermore, every _WaterBoundarySurface must have at least one associated surface geometry attached.

The water body model implicitly includes the concept of TerrainIntersectionCurves (TIC), e.g. to specify the exact intersection of the DTM with the 3D geometry of a WaterBody or to adjust a WaterBody or WaterSurface to the surrounding DTM. The rings defining the WaterSurface polygons implicitly delineate the intersection of the water body with the terrain or basin.

Relational database schema

The simplified UML database model defined in the previous Section 3.1 has been mapped to a relational schema based on some general mapping rules and conventions. The following sections discuss these rules and present the relational schema in detail using entity-relationship diagrams. Based on this work, the platform-specific SQL scripts for setting up the 3D City Database on PostgreSQL and Oracle have been derived automatically.

Mapping rules and metadata

Mapping of classes onto tables

In general, every class of the UML diagram is mapped onto a separate table; the name of the table is identical to the class name (leading underscores indicating abstract classes are omitted). If multiple classes are contained in an orange box in the UML diagram though, these classes are mapped onto a single table in the relational schema.

Scalar attributes of the classes become columns of the corresponding table with identical name. The types of attributes are customized to corresponding data types of the target database systems PostgreSQL/PostGIS and Oracle as shown in the following Table 3.1.

Data type mapping (excerpt)

UML                                                         | PostgreSQL / PostGIS              | Oracle
String, anyURI                                              | VARCHAR, TEXT                     | VARCHAR2, CLOB
Integer                                                     | NUMERIC                           | NUMBER
Double, gml:LengthType                                      | DOUBLE PRECISION                  | BINARY_DOUBLE
Boolean                                                     | NUMERIC                           | NUMBER(1,0)
Date                                                        | DATE or TIMESTAMP WITH TIME ZONE  | DATE or TIMESTAMP WITH TIME ZONE
Complex types (Color, TransformationMatrix, CodeType etc.)  | VARCHAR                           | VARCHAR2
Enumeration                                                 | VARCHAR                           | VARCHAR2
GML geometry, textureCoordinates                            | GEOMETRY                          | SDO_GEOMETRY
GML RectifiedGridCoverage                                   | RASTER                            | SDO_GEORASTER & SDO_RASTER
Texture (only reference of type anyURI in CityGML)          | BYTEA                             | BLOB

Explicit metadata about feature classes

The central metadata table OBJECTCLASS contains all feature classes supported by the 3D City Database. Every CityGML feature class is assigned a unique and stable ID in this table. For example, Building is assigned the ID value 26 and Bridge has the value 64. In addition, the name of the feature class is stored in the attribute CLASSNAME. The name of the table onto which the feature class has been mapped is provided by the TABLENAME column.

The SUPERCLASS_ID attribute references the direct superclass of the feature class and, thus, maps the class hierarchy. The additional BASECLASS_ID attribute points to the root class of the hierarchy and can be used to quickly understand whether an entry in OBJECTCLASS represents a GML feature type, object type or data type without having to traverse the entire class hierarchy. If a feature class represents a CityGML top-level feature, the IS_TOPLEVEL flag is set to 1 and 0 otherwise.

All city objects stored in the 3D City Database are registered in the root table CITYOBJECT. This table has an attribute OBJECTCLASS_ID which references an entry in OBJECTCLASS. This way, the class of the city object can be easily and efficiently identified. If required, its class name, its feature table, whether it is a top-level feature, and even its (transitive) superclasses can also be queried. In addition to CITYOBJECT, all tables that are used to store CityGML features provide an OBJECTCLASS_ID attribute.
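
For example, the class name, feature table, and top-level flag of every stored city object can be resolved with a single join between CITYOBJECT and OBJECTCLASS. The following query is only a minimal sketch for PostgreSQL; it assumes the default schema name citydb, which may differ in your installation:

    -- Resolve the feature class of every city object via OBJECTCLASS_ID
    SELECT co.id, co.gmlid, oc.classname, oc.tablename, oc.is_toplevel
    FROM citydb.cityobject co
    JOIN citydb.objectclass oc ON oc.id = co.objectclass_id;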

Note

Registering CityGML ADEs in the 3D City Database leads to additional entries in the OBJECTCLASS table for each class defined in the ADE. The OBJECTCLASS table has two more attributes, IS_ADE_CLASS and ADE_ID, which are used to manage and identify ADE classes. More information is provided in Section 3.2.16.

Excerpt of the OBJECTCLASS table

ID  | CLASSNAME                   | SUPERCLASS_ID
0   | Undefined                   |
1   | _GML                        |
2   | _Feature                    | 1
3   | _CityObject                 | 2
4   | LandUse                     | 3
5   | GenericCityObject           | 3
6   | _VegetationObject           | 3
7   | SolitaryVegetationObject    | 6
8   | PlantCover                  | 6
9   | WaterBody                   | 105
10  | _WaterBoundarySurface       | 3
11  | WaterSurface                | 10
12  | WaterGroundSurface          | 10
13  | WaterClosureSurface         | 10
14  | ReliefFeature               | 3
15  | _ReliefComponent            | 3
16  | TINRelief                   | 15
17  | MassPointRelief             | 15
18  | BreaklineRelief             | 15
19  | RasterRelief                | 15
20  | _Site                       | 3
21  | CityFurniture               | 3
22  | _TransportationObject       | 3
23  | CityObjectGroup             | 3
24  | _AbstractBuilding           | 20
25  | BuildingPart                | 24
26  | Building                    | 24
27  | BuildingInstallation        | 3
28  | IntBuildingInstallation     | 3
29  | _BuildingBoundarySurface    | 3
30  | BuildingCeilingSurface      | 29
31  | InteriorBuildingWallSurface | 29
32  | BuildingFloorSurface        | 29
33  | BuildingRoofSurface         | 29
34  | BuildingWallSurface         | 29
35  | BuildingGroundSurface       | 29
36  | BuildingClosureSurface      | 29
37  | _BuildingOpening            | 3
38  | BuildingWindow              | 37
39  | BuildingDoor                | 37
40  | BuildingFurniture           | 3
41  | BuildingRoom                | 3
42  | TransportationComplex       | 22
43  | Track                       | 42
44  | Railway                     | 42
45  | Road                        | 42
46  | Square                      | 42
47  | TrafficArea                 | 22
48  | AuxiliaryTrafficArea        | 22
49  | FeatureCollection           | 2
50  | Appearance                  | 2
51  | _SurfaceData                | 2
52  | _Texture                    | 51
53  | X3DMaterial                 | 51
54  | ParameterizedTexture        | 52
55  | GeoreferencedTexture        | 52
56  | _TextureParametrization     | 1
57  | CityModel                   | 49
58  | Address                     | 2
59  | ImplicitGeometry            | 1
60  | OuterBuildingCeilingSurface | 29
61  | OuterBuildingFloorSurface   | 29
62  | _AbstractBridge             | 20
63  | BridgePart                  | 62
64  | Bridge                      | 62
65  | BridgeInstallation          | 3
66  | IntBridgeInstallation       | 3
67  | _BridgeBoundarySurface      | 3
68  | BridgeCeilingSurface        | 67
69  | InteriorBridgeWallSurface   | 67
70  | BridgeFloorSurface          | 67
71  | BridgeRoofSurface           | 67
72  | BridgeWallSurface           | 67
73  | BridgeGroundSurface         | 67
74  | BridgeClosureSurface        | 67
75  | OuterBridgeCeilingSurface   | 67
76  | OuterBridgeFloorSurface     | 67
77  | _BridgeOpening              | 3
78  | BridgeWindow                | 77
79  | BridgeDoor                  | 77
80  | BridgeFurniture             | 3
81  | BridgeRoom                  | 3
82  | BridgeConstructionElement   | 3
83  | _AbstractTunnel             | 20
84  | TunnelPart                  | 83
85  | Tunnel                      | 83
86  | TunnelInstallation          | 3
87  | IntTunnelInstallation       | 3
88  | _TunnelBoundarySurface      | 3
89  | TunnelCeilingSurface        | 88
90  | InteriorTunnelWallSurface   | 88
91  | TunnelFloorSurface          | 88
92  | TunnelRoofSurface           | 88
93  | TunnelWallSurface           | 88
94  | TunnelGroundSurface         | 88
95  | TunnelClosureSurface        | 88
96  | OuterTunnelCeilingSurface   | 88
97  | OuterTunnelFloorSurface     | 88
98  | _TunnelOpening              | 3
99  | TunnelWindow                | 98
100 | TunnelDoor                  | 98
101 | TunnelFurniture             | 3
102 | HollowSpace                 | 3
103 | TexCoordList                | 56
104 | TexCoordGen                 | 56
105 | _WaterObject                | 3
106 | _BrepGeometry               | 0
107 | Polygon                     | 106
108 | BrepAggregate               | 106
109 | TexImage                    | 0
110 | ExternalReference           | 0
111 | GridCoverage                | 0
112 | _genericAttribute           | 0
113 | genericAttributeSet         | 112

Core schema

CITYOBJECT, CITYOBJECT_SEQ

All CityObjects (and instances of the subclasses like Buildings etc.) are represented by tuples in the root table CITYOBJECT. The fields are identical to the attributes of the corresponding UML class, plus additional columns for metadata like LAST_MODIFICATION_DATE, UPDATING_PERSON, REASON_FOR_UPDATE and LINEAGE.

The bounding box (gml:Envelope) is stored as a rectangular geometry using five points that connect the minimum and maximum x, y, and z coordinates of the bounding box and define it completely. For backwards compatibility reasons (with Oracle 10g), the envelope cannot be stored as a volume.

_images/citydb_envelope_definition.png

The CityObject’s envelope specified by two points with minimum and maximum coordinate values (left: black points) is stored as a 3D rectangle (right: black polygon using five points)
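
Since the envelope is stored as a regular geometry column, it can be used directly for spatial filtering. The following PostGIS query is only a sketch: the schema name citydb and the SRID 25832 are placeholders and must match the coordinate reference system of your 3D City Database instance:

    -- Find all city objects whose envelope intersects a 2D query window
    SELECT id, gmlid
    FROM citydb.cityobject
    WHERE envelope && ST_MakeEnvelope(458000, 5438000, 459000, 5439000, 25832);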

In order to identify each object, a unique identifier is essential. Therefore, the column GMLID stores the gml:id value of every city object. But since gml:ids cannot be guaranteed to be unique over different CityGML files, the column GMLID_CODESPACE is provided in addition. It may contain, for instance, the full path to the imported CityGML file containing the object. The combination of GMLID and GMLID_CODESPACE should be ensured to be unique for each CityObject.

The attributes NAME and NAME_CODESPACE can contain more than one gml:name property. In this case, the values have to be separated by the string ‘–/--’ (without quotes). The CityGML exporter will then create multiple occurrences of <gml:name> elements.

The attribute OBJECTCLASS_ID provides information on the class affiliation of the CityObject (see Section 3.2.1.2). This helps, for instance, to identify the proper subclass tables.

The next free ID value for the table CITYOBJECT is provided by the database sequence CITYOBJECT_SEQ. This ID is also reused as foreign key in the separate tables of the different thematic features.
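
A minimal sketch of how the sequence is typically used on PostgreSQL (schema name citydb assumed; the attribute values are purely illustrative):

    -- Draw the next free ID from the sequence and register a new Building (OBJECTCLASS_ID = 26)
    INSERT INTO citydb.cityobject (id, objectclass_id, gmlid, creation_date)
    VALUES (nextval('citydb.cityobject_seq'), 26, 'BLDG_0815', now());

On Oracle, the corresponding sequence expression would be CITYOBJECT_SEQ.NEXTVAL.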

CITYMODEL, CITYMODEL_SEQ

CityObject features may be aggregated to a single CityModel. A CityModel serves as root element of a CityGML feature collection. In order to provide a unique identifier in table CITYMODEL, the next available ID value is provided by the sequence CITYMODEL_SEQ.

EXTERNAL_REFERENCE, EXTERNAL_REF_SEQ

The table EXTERNAL_REFERENCE is used to store external references; the foreign key CITYOBJECT_ID refers to the associated CityObject. The sequence EXTERNAL_REF_SEQ provides the next available ID value for EXTERNAL_REFERENCE.

CITYOBJECTGROUP, GROUP_TO_CITYOBJECT

The n:m relationship between an object group (table CITYOBJECTGROUP) consisting of city objects contained in CITYOBJECT is realized by the table GROUP_TO_CITYOBJECT, which associates the IDs of both tables. The following tables show an example in which two buildings are grouped into a hotel complex.

CITYOBJECTGROUP table (excerpt)

ID | CLASS | CLASS_CODESPACE | FUNCTION       | FUNCTION_CODESPACE | USAGE | USAGE_CODESPACE
1  | NULL  | NULL            | Building group | NULL               | Hotel | NULL

GROUP_TO_CITYOBJECT table

CITYOBJECT_ID | CITYOBJECTGROUP_ID | ROLE
2             | 1                  | Main building
4             | 1                  | Annex

CITYOBJECT table (excerpt)

ID | OBJECTCLASS_ID | GMLID     | ENVELOPE | CREATION_DATE              | TERMINATION_DATE
2  | 26             | Build1632 | GEOMETRY | 2015-02-02 09:26:07.441+01 | NULL
4  | 26             | Build1633 | GEOMETRY | 2015-02-02 09:26:07.441+01 | NULL
1  | 23             | Group1700 | NULL     | 2015-02-02 09:26:07.441+01 | NULL

For the attributes CLASS, FUNCTION and USAGE there is an additional _CODESPACE column in order to specify the source of the code lists used for the values (e.g. by a globally unique URL). As a CityGML feature like CityObjectGroup can have multiple instances of the attributes class, function and usage, but only one target column exists in the table, values are separated by the string sequence ‘–/--’ (without quotes). The CityGML exporter will then create multiple occurrences of the corresponding elements. Normalization rules were not applied in this case in order to avoid many joins when querying all information of building objects. Array types were not used either, as their implementation varies between different database systems.

This concept applies to all CityGML features and can therefore be found in every object table (except for the boundary surfaces of buildings, bridges and tunnels). These attributes are not stored centrally in the CITYOBJECT table, because they belong to the namespace of a certain thematic module and should be stored along with the other attributes of that feature.
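
If the individual values are needed on the database side, the combined string can be split again. The following PostgreSQL sketch uses the delimiter quoted above and the default citydb schema; both are assumptions that may need to be adapted to your installation:

    -- Split the multi-valued FUNCTION attribute of city object groups into single values
    SELECT g.id, unnest(string_to_array(g.function, '–/--')) AS function_value
    FROM citydb.cityobjectgroup g;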

_images/citydb_schema_core.png

Database schema of the CityGML core elements

Geometry schema

The representation of the geometry stored in table SURFACE_GEOMETRY differs substantially from the UML diagram explained in the CityGML specification (also see Section 3.1.3.1); nevertheless, it offers about the same functionality.

SURFACE_GEOMETRY, SURFACE_GEOMETRY_SEQ

In the database schema, the geometry consists of planar surfaces which each correspond to one entry in the table SURFACE_GEOMETRY. The surface-based geometry of a feature is stored as attribute GEOMETRY (for every entry exactly one planar polygon, possibly including holes). An implicit geometry of the feature is stored as attribute IMPLICIT_GEOMETRY. A volumetric geometry is stored as attribute SOLID_GEOMETRY for the root entry of the geometry hierarchy, whereas its individual polygons that make up the shell of the solid are stored as separate entries within the hierarchy using the attribute GEOMETRY. Any surface may have textures or a colour on both sides. Textures are stored within the tables which implement the appearance model (cf. Section 3.1.4).

The geometry information in the fields GEOMETRY and IMPLICIT_GEOMETRY of the table SURFACE_GEOMETRY is limited as follows:

Storage of polygonal geometry

Oracle:
  • SDO_GTYPE must have the type Polygon, i.e. a polygon with 3D coordinates (SDO_GTYPE = 3003)
  • SDO_ETYPE must be 1003/2003 with SDO_INTERPRETATION = 1 (i.e. a polygon with 3D coordinates in the boundary, bounded just by line segments, possibly including holes)
  • In addition, Oracle allows the representation of a rectangle by two corner points (SDO_ETYPE = 1003/2003 with SDO_INTERPRETATION = 3)
  • SDO_SRID of implicit geometries can be any SRID Oracle supports. No spatial index is defined on the column by default.

PostGIS:
  • Only POLYGON Z is allowed, i.e. a polygon with 3D coordinates
  • Polygons might have holes
  • The IMPLICIT_GEOMETRY column has no SRID defined. Thus, entries in that column will automatically have the SRID 0.

A solid is the basis for 3-dimensional geometry. The extent of a solid is defined by its boundary surfaces (outer shell). A shell is represented by a composite surface, where every shell is used to represent a single connected component of the boundary of a solid. The composite surface (a list of OrientableSurfaces) describing a shell must be connected in a topological cycle. Unlike a ring, a shell’s elements have no natural sort order. Like rings, shells are simple. The geometry in the field SOLID_GEOMETRY of the table SURFACE_GEOMETRY is limited as follows:

Storage of 3D geometry

Oracle:
  • SDO_GTYPE must have the type Solid, i.e. a solid with 3D coordinates (SDO_GTYPE = 3008)
  • SDO_ETYPE must be 1007 (simple solid) or 1008 (composite solid)
  • A simple solid can be represented by using several polygons as its boundary (SDO_ETYPE = 1007 with SDO_INTERPRETATION = 1)
  • A composite solid can be constructed with a number of simple solids, e.g. a composite solid with 4 simple solids (SDO_ETYPE = 1008 with SDO_INTERPRETATION = 4)

PostGIS:
  • Only POLYHEDRALSURFACE is allowed, i.e. the outer shell of a solid with 3D coordinates
  • A simple polyhedral surface can be represented by using several polygons as its boundary
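
To illustrate the PostGIS side, the following snippet builds a closed polyhedral surface for a unit cube, which is the kind of value stored in SOLID_GEOMETRY. It is a standalone sketch; the coordinates are purely illustrative:

    -- A closed POLYHEDRALSURFACE (unit cube); ST_IsClosed returns true for a volumetric shell
    SELECT ST_IsClosed(ST_GeomFromText(
      'POLYHEDRALSURFACE(
         ((0 0 0, 0 0 1, 0 1 1, 0 1 0, 0 0 0)),
         ((0 0 0, 0 1 0, 1 1 0, 1 0 0, 0 0 0)),
         ((0 0 0, 1 0 0, 1 0 1, 0 0 1, 0 0 0)),
         ((1 1 0, 1 1 1, 1 0 1, 1 0 0, 1 1 0)),
         ((0 1 0, 0 1 1, 1 1 1, 1 1 0, 0 1 0)),
         ((0 0 1, 1 0 1, 1 1 1, 0 1 1, 0 0 1)))'));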

Surfaces can be aggregated to form a complex of surfaces or a volumetric object. For example, assume we want to store a volumetric geometry in SURFACE_GEOMETRY as shown in Fig. 3.29. The separate polygons forming the shell of the solid are represented as individual entries in SURFACE_GEOMETRY, and each entry uses the attribute GEOMETRY to store the polygon (IDs 6 to 10 in Fig. 3.29).

Next, we combine these polygons to a composite surface that forms the shell of our volumetric geometry. For this purpose, another entry is added to SURFACE_GEOMETRY to represent the composite surface (ID 2 in Fig. 3.29). This new entry is not assigned a geometry in the GEOMETRY attribute. Instead, the polygons with IDs 6 to 10 reference this entry as their parent entry using the PARENT_ID attribute. To mark the new entry as composite surface, the IS_COMPOSITE flag is set to 1.

As last step, we have to add another entry that represents our final solid (ID 1 in Fig. 3.29). For this entry, the IS_SOLID flag is set to 1, and the composite surface (ID 2) references it using the PARENT_ID attribute. The new solid entry represents the root of our geometry hierarchy. For this reason, every member of the hierarchy (including the root entry itself) must reference the ID of the root entry through the ROOT_ID attribute (see Fig. 3.29). For the root entry (and only for the root entry) the entire volumetric geometry is stored in addition as 3D geometry in the attribute SOLID_GEOMETRY, whereas the GEOMETRY attribute is not assigned.
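
The following INSERT statements sketch how the rows of this hierarchy could look on PostgreSQL (schema name citydb assumed). The actual polygon and solid values are left out (NULL) for brevity; in practice the polygons with IDs 6 to 10 carry their surfaces in GEOMETRY and the root entry carries the volume in SOLID_GEOMETRY:

    INSERT INTO citydb.surface_geometry
      (id, parent_id, root_id, is_solid, is_composite, is_triangulated, is_xlink, is_reverse, geometry, solid_geometry)
    VALUES
      (1,  NULL, 1, 1, 0, 0, 0, 0, NULL, NULL),  -- solid (root of the hierarchy)
      (2,  1,    1, 0, 1, 0, 0, 0, NULL, NULL),  -- composite surface forming the shell
      (6,  2,    1, 0, 0, 0, 0, 0, NULL, NULL),  -- polygon 6
      (7,  2,    1, 0, 0, 0, 0, 0, NULL, NULL),  -- polygon 7
      (8,  2,    1, 0, 0, 0, 0, 0, NULL, NULL),  -- polygon 8
      (9,  2,    1, 0, 0, 0, 0, 0, NULL, NULL),  -- polygon 9
      (10, 2,    1, 0, 0, 0, 0, 0, NULL, NULL);  -- polygon 10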

_images/citydb_schema_example_geometry_hierarchy.png

Geometry hierarchy for the solid geometry shown in Fig. 3.30

Storing the ROOT_ID for every member of the aggregation hierarchy has a big influence on query performance, as it allows all members of the hierarchy to be retrieved with a single query (WHERE ROOT_ID = x), thus avoiding recursive queries. If, for instance, all surface elements forming the geometry of a specific building shall be retrieved, then simply the foreign key reference to SURFACE_GEOMETRY stored in the BUILDING table has to be used as ROOT_ID to query all surface elements belonging to the geometry. On the downside, storing the ROOT_ID explicitly also comes with the limitation that each tuple in SURFACE_GEOMETRY can only belong to one aggregate.
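
For example, all surfaces of a building’s LoD2 solid geometry can be fetched non-recursively by joining on the stored root ID. This is a sketch for PostgreSQL; the LOD2_SOLID_ID column belongs to the BUILDING table described further below, and the building ID is illustrative:

    -- All members of the geometry hierarchy of building 1, selected via ROOT_ID
    SELECT sg.id, sg.parent_id, sg.is_solid, sg.is_composite, sg.geometry
    FROM citydb.building b
    JOIN citydb.surface_geometry sg ON sg.root_id = b.lod2_solid_id
    WHERE b.id = 1;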

Various flags characterise the type of aggregation: IS_TRIANGULATED denotes a TriangulatedSurface, IS_SOLID distinguishes between surface (0) and solid (1), and IS_COMPOSITE defines whether the entry represents an aggregate (0, e.g. MultiSolid, MultiSurface) or a composite (1, e.g., CompositeSolid, CompositeSurface).

Based on these flags the geometry types listed in Table 3.8 can be distinguished. To distinguish a MultiSolid from a MultiSurface its child elements have to be analysed: In case the child is a Solid, the geometry can be identified as MultiSolid.

Attributes determining aggregation types

Type                         | isSolid | isComposite | isTriangulated | GEOMETRY | SOLID_GEOMETRY
Polygon, Triangle, Rectangle | 0       | 0           | 0              | not NULL | NULL
MultiSurface                 | 0       | 0           | 0              | NULL     | NULL
CompositeSurface             | 0       | 1           | 0              | NULL     | NULL
TriangulatedSurface          | 0       | 0           | 1              | NULL     | NULL
Solid                        | 1       | 0           | 0              | NULL     | not NULL
MultiSolid                   | 0       | 0           | 0              | NULL     | NULL
CompositeSolid               | 1       | 1           | 0              | NULL     | not NULL

Aggregated surfaces can be grouped again with other (compound) surfaces, by generating a common parent. This way, arbitrary aggregations of Surfaces, CompositeSurfaces, Solids, CompositeSolids can be formed. Since all tuples in an aggregated geometry refer to the same ROOT_ID all tuples can be retrieved efficiently from the table by selecting those tuples with the same ROOT_ID.

The aggregation schema allows for the definition of nested aggregations (hierarchy of components). For example, a building geometry (CompositeSolid) can be composed of the house geometry (CompositeSolid) and the garage geometry (Solid), while the house’s geometry is further decomposed into the roof geometry (Solid) and the geometry of the house body (Solid).

In addition, the foreign key CITYOBJECT_ID refers directly to the CityGML features to which the geometry belongs. In order to select all geometries forming the city object one only has to select those with the same CITYOBJECT_ID.

In order to provide a unique identifier in table SURFACE_GEOMETRY, the next available ID value is provided by the sequence SURFACE_GEOMETRY_SEQ.

Example: The geometry shown in the figure below consists of seven surfaces which form a volumetric object. In the table it is represented by the following rows:

_images/citydb_schema_example_lod1solid_building.png

LoD 1 building - closed volume bounded by a CompositeSurface which consists of single polygons

Excerpt of table SURFACE_GEOMETRY representing the example given in Fig. 3.30

ID | GMLID       | PARENT_ID | ROOT_ID | IS_SOLID | IS_COMPOSITE | GEOMETRY               | SOLID_GEOMETRY
1  | UUID_lod1   | NULL      | 1       | 1        | 0            | NULL                   | GEOMETRY for Solid
2  | lod1Surface | 1         | 1       | 0        | 1            | NULL                   | NULL
3  | Left1       | 2         | 1       | 0        | 0            | GEOMETRY for surface 3 | NULL
4  | Front1      | 2         | 1       | 0        | 0            | GEOMETRY for surface 4 | NULL
5  | Right1      | 2         | 1       | 0        | 0            | GEOMETRY for surface 5 | NULL
6  | Back1       | 2         | 1       | 0        | 0            | GEOMETRY for surface 6 | NULL
7  | Roof1       | 2         | 1       | 0        | 0            | GEOMETRY for surface 7 | NULL

In addition, two further attributes are included in SURFACE_GEOMETRY: IS_XLINK and IS_REVERSE.

IS_XLINK

CityGML allows for sharing of geometry objects between different geometries or different thematic features using the XLink concept of GML3. For this purpose, the geometry object to be shared is assigned a unique gml:id which may be referenced by a GML geometry property element through its xlink:href attribute. This concept allows for avoiding data redundancy. Furthermore, CityGML does not employ the built-in topology package of GML3 but rather uses the XLink concept for the explicit modelling of topology (see [GKCN2008] p. 25).

Although an XLink can be seen as a pointer to an existing geometry object the SURFACE_GEOMETRY table does not offer a foreign key attribute which could be used to refer to another tuple within this table. The main reason for this is that the referenced tuple typically belongs to a different geometry aggregate, e.g. a different gml:Solid object, and thus contains different values for its ROOT_ID and PARENT_ID attributes. Therefore, foreign keys would violate the aggregation mechanism of the SURFACE_GEOMETRY table.

The recommended way of resolving XLink references to geometry objects requires two steps: First, the referenced tuple of the SURFACE_GEOMETRY table has to be identified by searching the GMLID column for the referenced gml:id value. Second, all attribute values of the identified tuple have to be copied to a new tuple. However, the ROOT_ID and PARENT_ID of this new tuple have to be set according to the context of the referencing geometry property element.

Please note:

  1. If the referenced tuple is the top of an aggregation (sub)hierarchy within the SURFACE_GEOMETRY table, then also all nested tuples have to be recursively copied and their ROOT_ID and PARENT_ID have to be adapted.
  2. Copying existing entries of the SURFACE_GEOMETRY table results in tuples sharing the same GMLID. Thus, these values cannot be used as a primary key.

When it comes to exporting data to a CityGML instance document, XLink references can be rebuilt by keeping track of the GMLID values of exported geometry tuples. Generally, for each and every tuple to be exported it has to be checked whether a geometry object with the same GMLID value has already been processed. If so, the export routine should make use of an XLink reference.

However, checking the GMLID of each and every tuple may dramatically slow down the export process. For this reason, the IS_XLINK flag of the SURFACE_GEOMETRY has been introduced. It may be used to explicitly mark just those tuples for which a corresponding check has to be performed.

  1. During import
     a. By default, the IS_XLINK flag is set to “0”.
     b. If existing tuples have to be copied due to an XLink reference, IS_XLINK has to be set for each and every copy to either “1” for global XLinks or “2” for local XLinks. Please note, that this rule comprises all copies of nested tuples.
     c. Furthermore, IS_XLINK has to be set to “1” or “2” on the original tuple addressed by the XLink reference. If this tuple is the top of an aggregation (sub)hierarchy, IS_XLINK remains “0” for all nested tuples.

Note

Local XLinks reference a geometry within the same top-level feature, whereas global XLinks reference a geometry from another top-level feature.

If an import tool cannot tell the difference between local and global references, then the value “1” shall be used for all IS_XLINK attributes.

  2. During export
     a. The export process just has to keep track of the GMLID values of those geometry tuples where IS_XLINK is set to “1” or “2”.
     b. When it comes to exporting a tuple with IS_XLINK set to “1” or “2”, the export process has to check whether it already came across the same GMLID and, thus, can make use of an XLink reference in the instance document.
     c. For each tuple with IS_XLINK=0 no further action has to be taken.

Especially due to 2c), the IS_XLINK attribute helps to significantly speed up the export process when rebuilding XLink references. Please note, that this is the only intended purpose of the IS_XLINK flag.

It also makes a difference whether the IS_XLINK attribute is set to “1” or “2”: If the export tool comes across a local reference (2), the GMLID of this geometry tuple only needs to be cached while exporting the top-level feature and can be released afterwards. Only global references (1) need to be cached during the entire export process.

Note

The Importer/Exporter provides a reference implementation for how to correctly copy referenced geometries and use the IS_XLINK flag. Simply use the tool to import test datasets and to check how the SURFACE_GEOMETRY table is populated.

IS_REVERSE

The IS_REVERSE flag is used in the context of gml:OrientableSurface geometry objects. Generally, an OrientableSurface instance cannot be represented within the SURFACE_GEOMETRY table since it cannot be encoded using the flags IS_SOLID, IS_COMPOSITE, and IS_TRIANGULATED (cf. Table 3.8). However, the IS_REVERSE flag is used to encode the information provided by an OrientableSurface and to rebuild OrientableSurfaces during data export.

According to GML3, an OrientableSurface consists of a base surface and an orientation. If the orientation is “+”, then the OrientableSurface is identical to the base surface. If the orientation is “-“, then the OrientableSurface is a reference to a surface with an up-normal that reverses the direction for this OrientableSurface.

During import, only the base surfaces are written to the SURFACE_GEOMETRY table. The following rules have to be obeyed in the context of OrientableSurface:

  1. If the orientation of the OrientableSurface is “-”, then
     a. The direction of the base surface has to be reversed prior to importing it (generally, this means reversing the order of coordinate tuples).
     b. The IS_REVERSE flag has to be set to “1” for the corresponding entry in the SURFACE_GEOMETRY table.
     c. If the base surface is an aggregate, then steps a) and b) have to be recursively applied for all of its surface members.
  2. If the OrientableSurface is identical to its base surface (i.e., if its orientation is “+”), then the base surface can be written to the SURFACE_GEOMETRY table without taking any further action. The IS_REVERSE flag has to be set to “0” (which is also the default value).
  3. Please note, that it is not sufficient to just rely on the gml:orientation attribute of an OrientableSurface in order to determine its orientation since OrientableSurfaces may be arbitrarily nested.

Flipping the direction of the base surface in step 1a) is essential in order to guarantee that the objects stored within the GEOMETRY column are always correctly oriented. This enables applications to just access the GEOMETRY column without having to interpret further attributes of the SURFACE_GEOMETRY table. For example, in the case of a viewer application this allows for a fast rendering of a virtual 3D city scene.
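
Because the stored surfaces are always correctly oriented, an export routine only has to flip back those tuples whose IS_REVERSE flag is set (the export rules are detailed below). A minimal PostGIS sketch, assuming the default citydb schema:

    -- Restore the original vertex order of reversed base surfaces for export
    SELECT id,
           CASE WHEN is_reverse = 1 THEN ST_Reverse(geometry) ELSE geometry END AS export_geometry
    FROM citydb.surface_geometry
    WHERE geometry IS NOT NULL;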

When exporting CityGML instance documents, the IS_REVERSE flag can be used to rebuild OrientableSurface in the following way:

  1. If the IS_REVERSE flag is set to “1” for a table entry, the exporter routine has to reverse the direction of the corresponding surface object prior to exporting it (again, this means reversing the order of coordinate tuples).
  2. The surface object has to be wrapped by a gml:OrientableSurface object with gml:orientation=”-”.
  3. If the surface object is an aggregate, its surface members having the same value for the IS_REVERSE flag may not be embraced by another OrientableSurface. However, if the IS_REVERSE value changes, e.g., from “1” for the aggregate to “0” for the surface member, also the surface member has to be embraced by a gml:OrientableSurface according to (2). Since there might be nested structures of arbitrary depth this third rule has to be applied recursively.

Note

Like with the IS_XLINK flag, the Importer/Exporter tool provides a reference implementation of the IS_REVERSE flag.

Appearance schema

APPEARANCE, APPEARANCE_SEQ

The table APPEARANCE contains information about the surface data of objects (attribute DESCRIPTION); the category of an appearance is stored in the attribute THEME. Since each city model or city object may store its own appearance data, the table APPEARANCE is related to the tables for the base classes CityObject and CityModel by two foreign keys which may be used alternatively. The classes Appearance and _SurfaceData represent features, which can be referenced by GML identifiers. For this reason, the attributes GMLID and GMLID_CODESPACE were added to the corresponding tables.

_images/citydb_schema_appearance.png

Appearance database schema

SURFACE_DATA, TEX_IMAGE, APPEAR_TO_SURFACE_DATA

An appearance is composed of data for each surface geometry object. Information on the data types and their appearance is stored in the table SURFACE_DATA.

IS_FRONT determines the side a surface data object applies to (IS_FRONT=1: front face; IS_FRONT=0: back face of the geometry). The OBJECTCLASS_ID column denotes whether materials or textures are used for the specific object (values: X3DMaterial, Texture or GeoreferencedTexture). Materials are specified by the attributes X3D_xxx which define their graphic representation. Details on using georeferenced textures, such as orientation and reference point, are contained in the attributes GT_xxx. See Section 3.1.4 for more information on SURFACE_DATA attributes or the CityGML specification (cf. [GKNH2012], p. 33-45), which explains the texture mapping process in detail.

Raster-based 2D textures are stored in the table TEX_IMAGE. The name of the corresponding image, for example, is specified by the attribute TEX_IMAGE_URI. The texture image can be stored within this table in the attribute TEX_IMAGE_DATA using the BLOB data type under Oracle and the BYTEA data type under PostgreSQL.

Table APPEAR_TO_SURFACE_DATA represents the interrelationship between appearances and surfaces for different themes.

TEXTUREPARAM

Attributes for mapping textures to geometry objects (point list or transformation matrix) which are defined by the CityGML classes _TextureParameterization, TexCoordList, and TexCoordGen are stored in the table TEXTUREPARAM.

_images/citydb_schema_example_appearance_texture.png

Simple example explaining texture mapping using texture coordinates

Example for table TEXTUREPARAM

SURFACE_GEOMETRY_ID | IS_TEXTURE_PARAMETRIZATION | WORLD_TO_TEXTURE | TEXTURE_COORDINATES | SURFACE_DATA_ID
7                   | 1                          | NULL             | GEOMETRY            | 20

Texture coordinates are applicable to polygonal surfaces, whose boundaries are described by a closed linear ring (last coordinate is equal to first). Coordinates are stored with a geometry data type. The WORLD_TO_TEXTURE attribute defines a transformation matrix from a location in world space to texture space. For more details see the CityGML Implementation Specification [GKNH2012].
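
The texture mapping of a single polygon can thus be collected by joining TEXTUREPARAM with SURFACE_DATA and TEX_IMAGE. The following query is only a sketch; the foreign key TEX_IMAGE_ID linking SURFACE_DATA to TEX_IMAGE is an assumption about the column name, and the citydb schema is the PostgreSQL default:

    -- Texture coordinates and image URI for the surface geometry with ID 7
    SELECT tp.surface_geometry_id, tp.texture_coordinates, sd.id AS surface_data_id, ti.tex_image_uri
    FROM citydb.textureparam tp
    JOIN citydb.surface_data sd ON sd.id = tp.surface_data_id
    LEFT JOIN citydb.tex_image ti ON ti.id = sd.tex_image_id  -- assumed foreign key name
    WHERE tp.surface_geometry_id = 7;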

_images/citydb_schema_example_building_appearance.png

Visualisation of a simple building in LoD1 and LoD2 using the appearance model. Two themes are defined for the building and the surrounding terrain: (a) building in summertime and (b) building in wintertime

Six surface representations are listed in table SURFACE_DATA (cf. Fig. 3.39). First of all, a homogeneous material is defined (ID=1), represented by a 3-component (RGB) colour value which will be used for both appearances (summer and winter). This also applies to a general side façade texture (ID=3, Fig. 3.36 right) which is repeated (wrapped) to fill the entire surface. For each of the front side, the back side and the ground two images are available: parameterized ones for the sides (Fig. 3.36 left and middle) and georeferenced ones for the ground and the roof surfaces (Fig. 3.35). The information of textures is stored in a separate table TEX_IMAGE. The coordinates for mapping the textures to the object are stored in table TEXTUREPARAM. For the general side texture (SURFACE_DATA_ID=3) five coordinate pairs are needed to define a closed ring (here: rectangle). Table SURFACE_GEOMETRY contains the information of all geometry parts that form the building and its appropriate 3D coordinates.

The following example illustrates the storage of appearances in the 3D City Database. Fig. 3.36 and Fig. 3.35 show the images used for texturing a building in LoD2. In LoD1, a material definition is used to define the wall colors of the building.

Fig. 3.37 to Fig. 3.41 show a combination of tables representing the building’s textures. There are different images available for summer and winter resulting in two themes: Summer and Winter. The tuples within the tables are color-coded according to their relation to the respective theme:

  • Green: only summer related data
  • Light-grey: only winter related data
  • Orange: both summer and winter related data

Fig. 3.34 shows the LoD2 representation of summer appearances (theme Summer).

_images/citydb_schema_example_lod2Surface_building.png

Surface geometries for the building in LoD2

_images/citydb_schema_images_georeferenced_textures.png

Images for georeferenced textures. The image ground_winter.png is assigned to the terrain and the roof surfaces of the building both in LoD1 and LoD2 within the winter theme (a), ground_summer.png within the summer theme (b)

_images/citydb_schema_images_parameterized_textures.png

Images for parameterized textures

_images/citydb_schema_APPEARANCE_table_figure.png

Excerpt of the APPEARANCE table. The relation to the building feature is given by the foreign key CITYOBJECT_ID

_images/citydb_schema_APPEAR_TO_SURFACE_table_figure.png

APPEAR_TO_SURFACE_DATA table

_images/citydb_schema_surface_data_table_figure.png

Excerpt of the SURFACE_DATA table

_images/citydb_schema_tex_image_table_figure.png

Excerpt of the TEX_IMAGE table

_images/citydb_schema_TEXTUREPARAM_table_figure.png

TEXTUREPARAM Table

Building schema

_images/citydb_schema_building_diagram.png

Building database schema

BUILDING

The building model, described in Section 3.1.5 at the conceptual level, is realised by the tables shown in Fig. 3.42. The three CityGML classes _AbstractBuilding, Building and BuildingPart are merged into the single table BUILDING. They can be distinguished by means of their OBJECTCLASS_ID. The subclass relationship with CITYOBJECT arises from using identical IDs, i.e. for each tuple in BUILDING there must exist a tuple with the same ID in CITYOBJECT.

Tree-like structure for recursive decomposition of buildings

ID | BUILDING_PARENT_ID | BUILDING_ROOT_ID | LOD0_FOOTPRINT_ID | LOD1_MULTI_SURFACE_ID | LOD4_SOLID_ID
1  | NULL               | 1                | 10                | NULL                  | NULL
2  | 1                  | 1                | NULL              | 20                    | NULL
3  | 1                  | 1                | NULL              | 30                    | NULL
4  | 2                  | 1                | NULL              | NULL                  | 400
5  | 2                  | 1                | NULL              | NULL                  | 500
6  | 3                  | 1                | NULL              | NULL                  | 600
7  | 3                  | 1                | NULL              | NULL                  | 700

The component hierarchy within a building is realized by the foreign key BUILDING_PARENT_ID which refers to the superordinate building (aggregate) and contains NULL, if such does not exist. This way, a tree-like structure arises also for building aggregates. BUILDING_PARENT_ID points at the predecessor in the tree. The foreign key BUILDING_ROOT_ID refers directly to the top level (root) of a building tree. In order to select all parts forming a building one only has to select those with the same BUILDING_ROOT_ID (cf. Table 3.11).
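
Retrieving a complete building together with all of its parts therefore does not require a recursive query. A minimal sketch for PostgreSQL, using the example IDs from the table above and the default citydb schema:

    -- Building 1 together with all of its (transitive) building parts
    SELECT id, building_parent_id, lod0_footprint_id, lod1_multi_surface_id, lod4_solid_id
    FROM citydb.building
    WHERE building_root_id = 1;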

The meaning and the name of most fields are identical to those of the attributes in the UML diagram (cf. Fig. 3.7). Like for CityObjectGroups there are additional _CODESPACE columns for the attributes class, function and usage. A _CODESPACE column is also added for the roofType attribute as it is specified as gml:CodeType in CityGML. For every attribute including measure information like measuredHeight or storeyHeightsAboveGround etc. an additional _UNIT column is provided to specify the unit of measurement.

Geometry is represented by several foreign keys LOD0_FOOTPRINT_ID, LOD0_ROOFPRINT_ID, LODx_MULTI_SURFACE_ID (1 ≤ x ≤ 4), and LODx_SOLID_ID (1 ≤ x ≤ 4) which refer to entries in the SURFACE_GEOMETRY table and represent each LoD’s surface geometry.

Optionally the geometry of the terrain intersection curve is stored in the attribute LODx_TERRAIN_INTERSECTION (1 ≤ x ≤ 4) using database geometry type (see Table 3.12). Additional line-typed building elements such as antennas are optionally modelled by the attribute LODx_MULTI_CURVE (1 ≤ x ≤ 4, using the same database geometry like for terrain intersection curves).

Storage of composite line string geometry

Oracle:
  • SDO_GTYPE must have the type MultiCurve/MultiLine, i.e. a composite geometry of different line string segments with 3D coordinates (SDO_GTYPE = 3006)
  • SDO_ETYPE must be 2 with SDO_INTERPRETATION = 1 (straight line segments), as curved geometries are not allowed in CityGML

PostGIS:
  • Only MULTILINESTRING Z is allowed, i.e. a composite geometry of different line string segments with 3D coordinates
  • The geometry type MULTICURVE is not used as CityGML does not allow geometries with arcs

THEMATIC_SURFACE

The table THEMATIC_SURFACE represents thematic boundary features. The CityGML class _BoundarySurface has a number of concrete subclasses representing different types of surfaces. One possibility would be to represent each of these classes by a table of its own. Here, the approach of a single table representing all of those classes was chosen: no separate tables for the subclasses of _BoundarySurface were created in the relational schema. Instead, the type of the boundary surface is given by the foreign key OBJECTCLASS_ID in the table THEMATIC_SURFACE. Allowed integer values:

  • 30 (CeilingSurface)
  • 31 (InteriorWallSurface)
  • 32 (FloorSurface)
  • 33 (RoofSurface)
  • 34 (WallSurface)
  • 35 (GroundSurface)
  • 36 (ClosureSurface)
  • 60 (OuterCeilingSurface)
  • 61 (OuterFloorSurface)

If a CityGML ADE is used that extends any of the classes named above, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).

The aggregation relation between buildings and the corresponding boundary surfaces results from the foreign key BUILDING_ID of the table THEMATIC_SURFACE, which refers to the ID of the respective building. The same applies to references between surfaces of building installations (BUILDING_INSTALLATION_ID) and rooms (ROOM_ID). Thematic surfaces and the corresponding parent feature should share their geometry: the geometry should be defined only once and referenced via XLinks. The SURFACE_GEOMETRY entry which, for example, geometrically defines a roof should at the same time be part of the volume geometry of the parent feature the roof belongs to.
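
Building on this, the LoD2 roof surfaces of a single building can be selected by combining the OBJECTCLASS_ID filter with the geometry reference. This is a sketch for PostgreSQL (schema name citydb assumed, building ID illustrative):

    -- LoD2 roof surface polygons (OBJECTCLASS_ID = 33) of building 1
    SELECT ts.id AS thematic_surface_id, sg.id AS surface_geometry_id, sg.geometry
    FROM citydb.thematic_surface ts
    JOIN citydb.surface_geometry sg ON sg.root_id = ts.lod2_multi_surface_id
    WHERE ts.building_id = 1
      AND ts.objectclass_id = 33
      AND sg.geometry IS NOT NULL;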

Example:

In Fig. 3.43, a building geometry is shown consisting of several surface geometries enclosing the outer building shell. Please note that the left wall (ID 5) is composed of two polygons (IDs 11 and 12) and that the roof is split into a left and a right part (IDs 20 and 21) each of which again consists of two polygons, the roof surface and an overhanging part. In the SURFACE_GEOMETRY table (cf. Table 3.13), the attribute IS_COMPOSITE is set to 1 for the tuples with IDs 5, 20 and 21 characterising them as composite surfaces. The surface geometries are semantically classified as roof, wall or ground surface by adding an entry into the THEMATIC_SURFACE table and linking this entry with the corresponding geometry tuple in SURFACE_GEOMETRY. In Table 3.14, an excerpt of the THEMATIC_SURFACE table is depicted. The tuple with ID 70 represents a RoofSurface by setting the OBJECTCLASS_ID attribute to the value 33. For its geometry, the tuple references ID 21 in the SURFACE_GEOMETRY table via the LOD2_MULTI_SURFACE_ID attribute.

_images/citydb_schema_lod2_building_roof_overhangs.png

LoD2 building with roof overhangs, highlighted in red

Excerpt of table SURFACE_GEOMETRY. Geometry objects are stored as database geometry datatype

ID | GMLID       | PARENT_ID | ROOT_ID | IS_SOLID | IS_COMPOSITE | IS_XLINK | GEOMETRY
3  | UUID_LoD2   | NULL      | 3       | 0        | 0            | 0        | NULL
5  | Left_Wall   | 3         | 3       | 0        | 1            | 0        | NULL
11 | Left_Wall_1 | 5         | 3       | 0        | 0            | 0        | Geometry comp (5-1) surface 11
12 | Left_Wall_2 | 5         | 3       | 0        | 0            | 0        | Geometry comp (5-2) surface 12
13 | Front       | 3         | 3       | 0        | 0            | 0        | Geometry surface 13
14 | Right_Wall  | 3         | 3       | 0        | 0            | 0        | Geometry surface 14
15 | Back        | 3         | 3       | 0        | 0            | 0        | Geometry surface 15
16 | Roof_part_1 | 21        | 3       | 0        | 0            | 1        | Geometry surface 16
17 | Roof_part_2 | 20        | 3       | 0        | 0            | 1        | Geometry surface 17
18 | Overhang_1  | 21        | 3       | 0        | 0            | 0        | Geometry of overhang 18
19 | Overhang_2  | 20        | 3       | 0        | 0            | 0        | Geometry of overhang 19
20 | Roof_right  | 3         | 3       | 0        | 1            | 0        | NULL
21 | Roof_left   | 3         | 3       | 0        | 1            | 0        | NULL
30 | UUID_Solid  | NULL      | 30      | 1        | 0            | 0        | NULL
31 | UUID_CS     | 30        | 30      | 0        | 1            | 0        | NULL
32 | Roof_part_1 | 31        | 30      | 0        | 0            | 1        | Geometry surface 16
33 | Roof_part_2 | 31        | 30      | 0        | 0            | 1        | Geometry surface 17
Excerpt of table THEMATIC_SURFACE

ID | OBJECTCLASS_ID | BUILDING_ID | ROOM_ID | LOD2_MULTI_SURFACE_ID
70 | 33             | 1           | NULL    | 21

In addition to thematic boundary surfaces, assume that we also want to represent the building volume as separate solid geometry that is stored with the building itself. For this purpose, another tuple with ID 30 is added to the SURFACE_GEOMETRY table whose IS_SOLID attribute is set to 1. This tuple is referenced from BUILDING using the LOD2_SOLID_ID attribute (cf. Table 3.15).

According to the CityGML specification, the surface geometries forming the solid geometry shall reference the geometries of the thematic boundary surfaces using GML's XLink mechanism. Therefore, the referenced geometries have to be copied and inserted as new tuples into SURFACE_GEOMETRY. Moreover, the IS_XLINK flag has to be set to 1 for the referenced geometries and their copies (see Section 3.2.3 for details). In Table 3.13, this is illustrated for the geometries with IDs 32 and 33, which are copies of the tuples with IDs 16 and 17 respectively. Note that the overhanging roof parts (IDs 18 and 19) are not referenced by the solid geometry, because they are dangling surfaces and not part of the volume.

Excerpt of table BUILDING

ID | BUILDING_ROOT_ID | LOD1_SOLID_ID | LOD2_SOLID_ID
1  | 1                | NULL          | 30

BUILDING_INSTALLATION

The UML classes BuildingInstallation and IntBuildingInstallation are realized by the single table BUILDING_INSTALLATION. Internal and external objects are distinguished by the attribute OBJECTCLASS_ID (external 27, internal 28). The relation to the corresponding parent feature arises from the foreign key BUILDING_ID or ROOM_ID, whereas the surface-based geometry in LoD 2 to 4 is given via the foreign keys LODx_BREP_ID (2 ≤ x ≤ 4) referring to the table SURFACE_GEOMETRY.

Additional point- or line-typed building installation elements such as antennas can be modelled by the attribute LODx_OTHER_GEOM (2 ≤ x ≤ 4) using the database geometry type (any GTYPE, ETYPE etc. in Oracle and GEOMETRY Z in PostGIS). Since CityGML 2.0.0, building installations can also be represented implicitly by using prototypes, which are stored as library objects. The information needed for mapping prototype objects to buildings consists of a base point geometry (LODx_IMPLICIT_REF_POINT (2 ≤ x ≤ 4)), a transformation matrix (LODx_IMPLICIT_TRANSFORMATION (2 ≤ x ≤ 4)), which is stored as a string, and a foreign key reference to the IMPLICIT_GEOMETRY table (LODx_IMPLICIT_REP_ID (2 ≤ x ≤ 4)) where a reference to an explicit surface-based geometry in LoD 2 to 4 is saved.

OPENING

Openings (CityGML class Opening) are represented by the table OPENING and are only allowed in LoD3 and 4. No individual tables are created for the subclasses. Instead, the differentiation is achieved by the foreign key OBJECTCLASS_ID which refers to the attribute ID of the (meta) table OBJECTCLASS. Valid integer values are 39 (Door) and 38 (Window). If a CityGML ADE is used that extends any of the two classes Door or Window, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).

Table OPENING_TO_THEM_SURFACE associates an opening ID in table OPENING with a thematic surface ID in table THEMATIC_SURFACE representing the m:n relation between both tables. An address can be assigned to a door (table OPENING) by the foreign key ADDRESS_ID in the table OPENING. Furthermore, addresses may be assigned to buildings (see table ADDRESS for detailed information).
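A minimal sketch of resolving this m:n relation (the thematic surface ID 70 and the 'citydb' schema prefix are assumptions, not taken from a verified script):

-- Openings (doors and windows) attached to one thematic surface
SELECT o.id, o.objectclass_id, o.address_id
FROM citydb.opening o
JOIN citydb.opening_to_them_surface ots ON ots.opening_id = o.id
WHERE ots.thematic_surface_id = 70;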

Like building installations, openings can be modelled via implicit geometry since CityGML 2.0.0. Thus, the OPENING table also contains the columns LODx_IMPLICIT_REP_ID, LODx_IMPLICIT_REF_POINT and LODx_IMPLICIT_TRANSFORMATION.

ROOM

Room objects are allowed in LoD4 only. Therefore, only the foreign keys LOD4_MULTI_SURFACE_ID and LOD4_SOLID_ID refer to the table SURFACE_GEOMETRY. Additionally, foreign keys to the tables BUILDING and CITYOBJECT are necessary to map the relationships to these tables.

BUILDING_FURNITURE

As rooms may be equipped with furniture (chairs, wardrobes, etc.), the foreign key ROOM_ID referencing the ROOM table is mandatory. The geometry of furniture objects can be described explicitly using the attribute LOD4_OTHER_GEOM representing point- or line-typed entities, or using the foreign key LOD4_BREP_ID referring to the table SURFACE_GEOMETRY. Alternatively, the geometry of furniture objects may be represented by using prototypes (ImplicitGeometry) which are stored as library objects. Again, the information needed for mapping prototype objects to rooms consists of a base point, a transformation matrix and a reference to the IMPLICIT_GEOMETRY table.

ADDRESS, ADDRESS_TO_BUILDING, and ADDRESS_SEQ

Addresses are realized by the table ADDRESS. The m:n relation with buildings arises from the table ADDRESS_TO_BUILDING which associates a building ID and an address ID. An address can also be assigned to a door (table OPENING) by the foreign key ADDRESS_ID in the table OPENING. The same applies to addresses of bridges (incl. a table ADDRESS_TO_BRIDGE) and bridge openings.

The next available ID for the table ADDRESS is provided by the sequence ADDRESS_SEQ.

Bridge schema

_images/citydb_schema_bridge_diagram.png

Bridge database schema

The bridge model, described in Section 3.1.6 at the conceptual level, is realised by the tables shown in Fig. 3.44. The relational schema is identical to the building schema for the most part, except for the naming. Please refer to the explanation of the building schema on the previous pages for a complete understanding. The main differences to the building schema are the following:

  • Bridges cannot be modelled in LoD 0. Therefore, no corresponding columns appear in the BRIDGE table.
  • CityGML features belonging to bridges, such as boundary surfaces, installations, openings, rooms and furniture, are mapped to separate specific tables and are not stored in already existing ones (e.g. THEMATIC_SURFACE, OPENING, ROOM). Thus, values in OBJECTCLASS_ID columns are different as well. The reason for this is to provide a schema that is as close to the UML model as possible. There are slight differences between the building and the bridge model that would lead to ambiguous references, e.g. a boundary surface of the building namespace cannot reference a bridge construction element.
  • OBJECTCLASS_ID of table BRIDGE_THEMATIC_SURFACE allows the values:
    • 68 (BridgeCeilingSurface),
    • 69 (InteriorBridgeWallSurface)
    • 70 (BridgeFloorSurface),
    • 71 (BridgeRoofSurface),
    • 72 (BridgeWallSurface),
    • 73 (BridgeGroundSurface),
    • 74 (BridgeClosureSurface),
    • 75 (OuterBridgeCeilingSurface),
    • 76 (OuterBridgeFloorSurface).
If a CityGML ADE is used that extends any of the classes named above, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).
  • In the BRIDGE_INSTALLATION table, external bridge installations can be identified by the OBJECTCLASS_ID 65 and internal ones by 66.
  • The CityGML class BridgeConstructionElement is represented by the table BRIDGE_CONSTR_ELEMENT. Its schema is analogous to the BRIDGE_INSTALLATION table for the most part. The relation to the corresponding bridge results from the foreign key BRIDGE_ID. Explicit and implicit geometry or a decomposition through boundary surfaces is possible. Additionally, terrain intersection curves of construction elements can also be stored.
  • The OBJECTCLASS_ID column in table BRIDGE_OPENING can be of integer value 79 (BridgeDoor) or 78 (BridgeWindow). They are associated with entries in the table BRIDGE_THEMATIC_SURFACE via the BRIDGE_OPEN_TO_THEM_SRF link table. If a CityGML ADE is used that extends any of the two classes BridgeDoor or BridgeWindow, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4). Like building openings, bridge openings can have addresses assigned to them.

CityFurniture schema

The CityGML feature class CityFurniture and its attributes specified in the UML diagram (Fig. 3.13) are directly mapped to the CITY_FURNITURE table and its corresponding columns.

_images/citydb_schema_cityfurniture_diagram.png

CityFurniture database schema

The geometry of city furniture objects is represented either as a surface-based geometry object (LODx_BREP_ID, where 1 ≤ x ≤ 4) related to table SURFACE_GEOMETRY, as a point- or line-typed object (LODx_OTHER_GEOM, where 1 ≤ x ≤ 4), or as implicit geometry (LODx_IMPLICIT_REP_ID, LODx_IMPLICIT_REF_POINT, LODx_IMPLICIT_TRANSFORMATION, with 1 ≤ x ≤ 4). Optionally, terrain intersection curves can be stored for city furniture objects.

Generics schema

3D city models will most likely contain attributes, which are not explicitly modelled in CityGML. Moreover, there may be 3D objects that are not covered by the thematic classes of CityGML. Generic objects and attributes help to support the storage of such data.

GENERIC_CITYOBJECT

For generic objects, the full variety of different geometrical representations known from other tables is offered: explicit geometry (LODx_BREP_ID, LODx_OTHER_GEOM), implicit geometry (LODx_IMPLICIT_REP_ID, LODx_IMPLICIT_REF_POINT, LODx_IMPLICIT_TRANSFORMATION), and terrain intersection curves (LODx_TERRAIN_INTERSECTION), all with 0 ≤ x ≤ 4.

_images/citydb_schema_generics_diagram.png

GenericCityObject and generic attributes database schema

CITYOBJECT_GENERICATTRIB, CITYOBJECT_GENERICATT_SEQ

The table CITYOBJECT_GENERICATTRIB is used to represent the concept of generic attributes. Instead of creating a table for every type of attribute, a single table CITYOBJECT_GENERICATTRIB represents all types; the types are differentiated via the values of the attribute DATATYPE.

The table provides fields for every data type, but only one of those fields is relevant in each case. An overview of the meaning of the entries in the field DATATYPE is given in Table 3.16. The relation between the generic attribute and the corresponding CityObject is established by the foreign key CITYOBJECT_ID.

GenericAttribute types and the meaning of the DATATYPE values

DATATYPE | Attribute type
1        | STRING
2        | INTEGER
3        | REAL
4        | URI
5        | DATE
6        | MEASURE
7        | Group of generic attributes
8        | BLOB
9        | Geometry type
10       | Geometry via surfaces in the table SURFACE_GEOMETRY

Note

Please note that the binary and geometric data types (incl. geometry via surfaces) are not supported by CityGML and cannot be exported using the CityGML Importer/Exporter tool.

Generic attributes can also be grouped using the CityGML class genericAttributeSet. Since a genericAttributeSet itself is a generic attribute, it may also be contained in another generic attribute set, facilitating a recursive nesting of arbitrary depth. This hierarchy within a genericAttributeSet is realized by the foreign key PARENT_GENATTRIB_ID, which refers to the superordinate genericAttributeSet (aggregate) and contains NULL if no such set exists. The foreign key ROOT_GENATTRIB_ID refers directly to the top level (root) of a genericAttributeSet tree. In order to select all generic attributes forming a genericAttributeSet, one only has to select those with the same ROOT_GENATTRIB_ID.

The next available ID for the table CITYOBJECT_GENERICATTRIB is provided by the sequence CITYOBJECT_GENERICATT_SEQ.
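To illustrate both concepts (a sketch; the column names ATTRNAME and STRVAL as well as the city object ID 1 are assumptions), the string-typed generic attributes of one city object could be queried like this:

-- String-typed generic attributes (DATATYPE = 1) of a city object, including the set they belong to
SELECT attrname, strval, root_genattrib_id
FROM citydb.cityobject_genericattrib
WHERE cityobject_id = 1
  AND datatype = 1;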

LandUse schema

The CityGML feature class LandUse and its attributes specified in the UML diagram (cf. Fig. 3.15) are directly mapped to the LAND_USE table and its corresponding columns. The relation to table SURFACE_GEOMETRY is established by the foreign keys LODx_MULTI_SURFACE_ID, where 0 ≤ x ≤ 4.

_images/citydb_schema_landuse_diagram.png

LandUse database schema

Digital terrain schema

A tuple in the table RELIEF_FEATURE represents a complex relief object, which consists of different relief components. It has an attribute LOD that describes the affiliation of the relief object to a certain level of detail (LoD) of the city model. The individual components of a complex relief object are stored in the tables BREAKLINE_RELIEF, TIN_RELIEF, MASSPOINT_RELIEF and RASTER_RELIEF. Every relief component has an attribute LOD that describes its affiliation to a certain level of detail (resolution, accuracy). However, individual components of a complex relief object may belong to different LoDs and may be heterogeneous, i.e. a mixture of TINs, grids and mass points. Optionally, the geometrical separation between the individual relief components of a complex relief object can be realized via polygons (attribute EXTENT), which specify the validity area of each relief component. Every relief component has an attribute NAME that is used for naming the component. The relief as well as every relief component are derived from CITYOBJECT and receive the same ID as the CityObject. The table RELIEF_FEAT_TO_REL_COMP represents the interrelationship between relief features and relief components.

_images/citydb_schema_relief_diagram.png

Digital Terrain Model database schema

A raster relief is the only feature in CityGML that can be described by a grid coverage. Corresponding database types are SDO_GEORASTER in Oracle Spatial 11g or higher (not available in Oracle Locator) and RASTER in PostGIS 2.0 or higher. In Oracle, for each table that stores SDO_GEORASTER objects an additional table of type SDO_RASTER (raster data table, RDT) is mandatory. It stores the raster cell data of the SDO_GEORASTER objects.

In case a grid representation is introduced for other features in CityGML in the future, numerous RDT tables would be created if grids were stored along with the thematic tables. Thus, a central table called GRID_COVERAGE is used to register all grid data and to prevent numerous additional tables in the 3DCityDB schema. This concept is analogous to the storage of surface-based geometry, where SURFACE_GEOMETRY is the central table.

Since Oracle Spatial 11g, the SDO_GEORASTER type supports the Oracle Workspace Manager (cf. [Murr2010]). Therefore, the table GRID_COVERAGE_RDT can be versioned for history management. However, Oracle Spatial does not allow users to version-enable the tables in which GeoRaster objects are stored. Hence, the table GRID_COVERAGE cannot be versioned using the Oracle Workspace Manager.

Geometry attributes for different relief components are limited to these value domains:

BREAKLINE_RELIEF

  • BREAK_LINES and RIDGE_OR_VALLEY_LINES
    • Oracle: MultiLine (GTYPE 3006)
    • PostGIS: MultiLineString Z

TIN_RELIEF

  • STOP_LINES and BREAK_LINES
    • Oracle: MultiLine (GTYPE 3006)
    • PostGIS: MultiLineString Z
  • RELIEF_POINTS
    • Oracle: MultiPoint (GTYPE 3001 or 3005)
    • PostGIS: MultiPoint Z
  • TIN
    • TIN triangles could be stored as triangulated surfaces in table SURFACE_GEOMETRY

MASSPOINT_RELIEF

  • RELIEF_POINTS
    • Oracle: MultiPoint (GTYPE 3001 or 3005)
    • PostGIS: MultiPoint Z

RELIEF_COMPONENT

  • EXTENT (defines the validity extents of each relief component)
    • Oracle: Polygon (GTYPE 3003, ETYPE 1003, SDO_INTERPRETATION 1 or 3 (optimized rectangle))
    • PostGIS: Polygon Z

Transportation schema

For the realisation of transportation objects two tables are provided: TRAFFIC_AREA and TRANSPORTATION_COMPLEX.

TRAFFIC_AREA

In addition to the common attribute triple class, function and usage, traffic areas can store information about their surfaceMaterial. In the UML model this attribute is specified as gml:CodeType, which makes an additional _CODESPACE column necessary. The representation of geometry is handled by the foreign keys LODx_MULTI_SURFACE_ID (with 2 ≤ x ≤ 4). The aggregation relation between a transportation complex and the corresponding traffic areas results from the foreign key TRANSPORTATION_COMPLEX_ID. The foreign key OBJECTCLASS_ID indicates whether a tuple represents a TrafficArea (value 47) or an AuxiliaryTrafficArea (value 48) feature. If a CityGML ADE is used that extends any of the two classes TrafficArea or AuxiliaryTrafficArea, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).

TRANSPORTATION_COMPLEX

As shown in the UML diagram, every transportation object may have the attributes class, function and usage. For differentiation between the subclasses, an OBJECTCLASS_ID column is used again:

  • 42 (TransportationComplex)
  • 43 (Track)
  • 44 (Railway)
  • 45 (Road)
  • 46 (Square)

If a CityGML ADE is used that extends any of the classes named above, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).

At the coarsest level, transportation complexes are modelled by line objects. The corresponding column LOD0_NETWORK is of geometry type MultiCurve in Oracle and MultiLineString Z in PostGIS. Starting from LoD1, the representation of the object geometry is handled by the foreign keys LODx_MULTI_SURFACE_ID (with 1 ≤ x ≤ 4).

_images/citydb_schema_transportation_diagram.png

Transportation database schema

Tunnel schema

_images/citydb_schema_tunnel_diagram.png

Tunnel database schema

The tunnel model, described in Section 3.1.12 at the conceptual level, is realised by the tables shown in Fig. 3.50. The relational schema is identical to the building and bridge schemas for the most part, except for the naming. Please refer to the explanation of the building schema on the previous pages for a complete understanding. The main differences to the building schema are the following:

  • Tunnels cannot be modelled in LoD 0. Therefore, no corresponding columns appear in the TUNNEL table.
  • The CityGML feature HollowSpace can be seen as analogous to the feature Room of a building or a bridge.
  • CityGML features of tunnels, such as boundary surfaces, installations, openings, hollow spaces and furniture, are mapped to separate specific tables and are not stored in already existing ones (e.g. THEMATIC_SURFACE, OPENING). The reason for this is to provide a schema that is as close to the UML model as possible. There are slight differences between the building and the tunnel model that would lead to ambiguous references, e.g. a boundary surface of the building namespace cannot reference a tunnel feature.
  • OBJECTCLASS_ID of table TUNNEL_THEMATIC_SURFACE allows the values:
    • 89 (TunnelCeilingSurface),
    • 90 (InteriorTunnelWallSurface)
    • 91 (TunnelFloorSurface),
    • 92 (TunnelRoofSurface),
    • 93 (TunnelWallSurface),
    • 94 (TunnelGroundSurface),
    • 95 (TunnelClosureSurface),
    • 96 (OuterTunnelCeilingSurface),
    • 97 (OuterTunnelFloorSurface).
  • In the TUNNEL_INSTALLATION table external tunnel installations can be identified by the OBJECTCLASS_ID 86 and internal ones by 87.
  • The OBJECTCLASS_ID column in table TUNNEL_OPENING can be of integer value 100 (TunnelDoor) or 99 (TunnelWindow). They are associated with entries in the table TUNNEL_THEMATIC_SURFACE via the TUNNEL_OPEN_TO_THEM_SRF link table.
  • If a CityGML ADE is used that extends any of the named classes above, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).
  • In contrast to the building model tunnels and tunnel openings do not have addresses.

Vegetation schema

The vegetation model specified in Section 3.1.13 is realized by the tables shown in Fig. 3.51 which correspond largely to the UML model.

_images/citydb_schema_vegetation_diagram.png

Vegetation database schema

SOLITARY_VEGETAT_OBJECT

The attributes class, function, usage, species, height, trunkDiameter, and crownDiameter describe single vegetation objects. The attribute species is of type gml:CodeType in CityGML and can be referenced to a certain codespace. Therefore, another _CODESPACE column is provided in the SOLITARY_VEGETAT_OBJECT table. Similar to the BUILDING table, attributes with measure information can optionally be coupled with a reference to the used measuring scale by an additional _UNIT column.

The geometry of vegetation objects can either be described explicitly using the attributes LODx_OTHER_GEOM or LODx_BREP_ID, or implicitly using a foreign key relation to the IMPLICIT_GEOMETRY table including a reference point and optionally a transformation matrix (LODx_IMPLICIT_REP_ID, LODx_IMPLICIT_REF_POINT, LODx_IMPLICIT_TRANSFORMATION, with 1 ≤ x ≤ 4).

PLANT_COVER

Information on vegetation areas is contained in the attributes usage, class, function, and averageHeight. There is also a _UNIT column to specify the scale the averageHeight values are based on. The geometry is restricted to a MultiSurface or (and this is unique for PlantCover features) a MultiSolid, represented respectively by the foreign keys LODx_MULTI_SURFACE_ID and LODx_MULTI_SOLID_ID (with 1 ≤ x ≤ 4) which refer to the SURFACE_GEOMETRY table.

WaterBody schema

WATERBODY, WATERBOD_TO_WATERBND_SRF

The modelling of the WATERBODY database schema corresponds largely to the respective UML model. For LoD0 and LoD1 additional attributes are added, e.g. for modelling river geometry (LODx_MULTI_CURVE).

The geometries of LOD0 and LOD1 areal water bodies are stored within the table SURFACE_GEOMETRY. The foreign keys LODx_MULTI_SURFACE_ID (with 0 ≤ x ≤ 1) refer to the corresponding rows. Geometry for water filled volumes is handled in a similar way using foreign keys LODx_SOLID_ID (with 1 ≤ x ≤ 4).

For mapping the boundedBy aggregation which identifies the water body’s exterior shell managed by the WATERBOUNDARY_SURFACE table, the additional table WATERBOD_TO_WATERBND_SRF is needed to realise the m:n relationship.

WATERBOUNDARY_SURFACE

The exterior shell of a WaterBody can be differentiated semantically using features of the type _WaterBoundarySurface. These features are stored in the WATERBOUNDARY_SURFACE table and can be distinguished by the OBJECTCLASS_ID attribute:

  • 11 (WaterSurface)
  • 12 (WaterGroundSurface)
  • 13 (WaterClosureSurface)

If a CityGML ADE is used that extends any of the named classes above, further values for OBJECTCLASS_ID may be added by the ADE manager. Their concrete numbers depend on the ADE registration (cf. Section 5.3.4).

Since every _WaterBoundarySurface object must have at least one associated surface geometry, the foreign keys LODx_SURFACE_ID (with 2 ≤ x ≤ 4, no MultiSurface here) are used to realise these relations.

_images/citydb_schema_waterbody_diagram.png

WaterBody database schema

Sequences

Fig. 3.53 lists predefined sequences from which multiple users may generate unique integers for primary keys automatically. Sequences help to coordinate primary keys across multiple rows and tables. For instance, the ID values of the BUILDING table are generated from the CITYOBJECT_SEQ sequence. The sequences are defined to start with 1 and to be incremented by 1 when a sequence number is generated. It is highly recommended to generate ID values for all tables by using the predefined sequences only.

The sequence GRID_COVERAGE_RDT_SEQ does not exist in the PostgreSQL version as the corresponding table does not exist.
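For example, the next ID value can be drawn from a sequence with the standard mechanisms of the respective database (shown for CITYOBJECT_SEQ; the schema prefix depends on your setup):

-- PostgreSQL
SELECT nextval('citydb.cityobject_seq');

-- Oracle (SQL*Plus)
SELECT cityobject_seq.NEXTVAL FROM dual;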

_images/citydb_schema_sequences_diagram.png

Overview of all sequences used in 3DCityDB

Managing ADE schemas

Logical 3DCityDB schema modules

The 3D City Database schema can be dynamically extended with user-defined schemas for storing CityGML ADE data. Every additional ADE schema is registered in special tables of the 3DCityDB schema that hold relevant metadata about the ADE itself. The feature and object classes defined in the ADE are added to the OBJECTCLASS table.

Logically, all tables within a 3D City Database instance can therefore be grouped into three modules:

  1. Core Data Module containing the core tables for storing CityGML data,
  2. Metadata Module containing tables for the registration of ADEs, and
  3. Dynamic Data Module containing the tables for storing the actual ADE data.

Note

ADE support has been introduced with version 4.0 of the 3D City Database. The database schema of previous versions of the 3D City Database basically consists of the tables in the Core Data Module.

The relations between the modules are shown in the following figure.

_images/citydb_conceptual_database_structure.png

New conceptual 3DCityDB database structure for handling CityGML ADEs

The green tables enclosed in the Core Data Module represent those database tables that are responsible for storing the standard CityGML features such as Building, Transportation, Tunnel, CityFurniture, CityObjectGroup, Generic, Appearance etc., which have been introduced and discussed in the previous sections of this chapter.

For every CityGML ADE, an additional set of database tables for storing the ADE data is dynamically added to the Dynamic Data Module (pink tables in the figure). In addition, relationships between model classes defined in the ADEs and classes from CityGML, such as generalization/specialization and associations, are adequately reflected using database foreign key constraints, which help ensure data integrity and consistency within the database system.

The Metadata Module is used for storing the relevant meta-information (e.g. the XML namespaces, schema files, and class affiliations etc.) about ADEs as well as the referencing relations among the ADE and CityGML application schemas. This way, the dependencies between the registered ADE application schemas can be directly read from the 3DCityDB database schema to facilitate the database administration process, i.e. the registration and de-registration of multiple CityGML ADEs within a 3DCityDB instance.

ADE metadata model

An overview of the relational structure of the Metadata Module is shown in Fig. 3.55. The table ADE serves as a central registry for all the registered CityGML ADEs each of which corresponds to a table row and the relevant ADE metadata attributes are mapped onto the respective columns. For example, each registered ADE shall own a globally unique ID value for identification purpose. This ID value could be a UUID (Universally Unique Identifier) that can be automatically generated and stored in the column ADEID while registering the ADE. The columns NAME and DESCRIPTION are mainly used for storing a basic description about each ADE. The column VERSION denotes the version number of an ADE and allows to distinguish different release versions.

In the 3DCityDB database schema, the database objects like tables, indexes, foreign key constraints, and sequences of a certain ADE shall be named starting with a unique prefix that is stored in the DB_PREFIX column. This allows applications to easily retrieve the database schema of a certain ADE using a wildcard filter. This way, it is possible to automatically perform statistics on the ADE data contents stored in the individual tables. In addition, the column XML_SCHEMAMAPPING_FILE is used to store the XML-formatted schema mapping information of each ADE and is hence defined with the CLOB data type. Another CLOB-typed column is DROP_DB_SCRIPT, which stores the SQL statements for dropping the individual ADE database schema. To remove an ADE from the 3DCityDB, this script can be easily retrieved and executed at the database side.

Moreover, CREATION_DATE and CREATION_PERSON are two application-specific attribute columns providing information about when and by whom an ADE was registered in the 3DCityDB. This meta-information is typically helpful for 3DCityDB users to accomplish administration work, e.g. searching for and cleaning up those ADEs that are outdated or were registered by certain database users.

_images/citydb_schema_metadata_diagram.png

Technical implementation of the 3DCityDB Metadata Module in a relational diagram

A CityGML ADE may consist of multiple application schemas, one of which should be the root schema referencing the others. Such dependency information along with the meta-information about the individual schemas is stored in two tables, namely SCHEMA and SCHEMA_REFERENCING. The SCHEMA_REFERENCING table is an associative table which contains the two foreign key columns REFERENCED_ID and REFERENCING_ID to link the respective referencing and referenced schemas. In the table SCHEMA, the flag attribute IS_ADE_ROOT is used for denoting the root schema that directly or indirectly references all the other ADE schemas of an ADE. In this way, the dependency hierarchy of the ADE schemas can be fully represented in a relational model to facilitate the reconstruction of the original schema relations through user applications. For each schema, its meta-information such as the schema location, namespace, namespace prefix, source XML schema definition file, as well as the file type (e.g. plain XML text or archived) of the schema can also be stored in the further columns of the SCHEMA table. The column CITYGML_VERSION refers to the CityGML version for which the ADE has been defined.

The table OBJECTCLASS is a central registry for enumerating not only the standard CityGML classes but also the classes of the registered ADEs. For this reason, it has been logically moved from the Core Data Module into the Metadata Module. Each class is assigned a globally unique numeric ID in OBJECTCLASS for querying and accessing the class-related information. As explained in Section 3.2.1.2, the ID values ranging from 0 to 113 are used for the standard CityGML classes.

Important

To be able to implement future changes for the 3DCityDB schema, the ID value range 0 – 9999 is reserved for the core CityGML schema and shall not be used for ADE classes.

In order to avoid ID clashes between ADEs, each ADE shall own a certain value range which can be centrally maintained and organized by an official community like the 3DCityDB group. The OBJECTCLASS table also contains a few additional columns, such as IS_ADE_CLASS, which is a flag to mark and easily identify those classes belonging to ADEs. Another column named TABLENAME refers to the table name of a CityGML or ADE class and provides the basic information about model mapping. The last two columns, SUPERCLASS_ID and BASECLASS_ID, are foreign key columns referencing the ID column and represent the inheritance hierarchy of all the CityGML and ADE classes in a relational structure.

In addition to the inheritance relationship mapped in OBJECTCLASS, the aggregation relationships between CityGML and ADE classes can also be represented within a 3DCityDB instance by means of the table AGGREGATION_INFO. Its first two columns CHILD_ID and PARENT_ID are foreign key columns which point to the primary key column of the table OBJECTCLASS to reflect the two related classes. The aggregation or composition relationship between each pair of classes can be distinguished by using the flag attribute IS_COMPOSITE, whose value can either be 0 (aggregation) or 1 (composition). In 3DCityDB, each aggregation/composition is logically mapped onto a foreign key column or an associative table for joining the two respective class tables. This meta-information can also be stored in the table AGGREGATION_INFO using its column JOIN_TABLE_OR_COLUMN_NAME. In addition, the multiplicities of the individual aggregations/compositions are stored in the two numeric columns MIN_OCCURS and MAX_OCCURS. In case of a 0..* relationship where the value of the multiplicity end is unbounded, the value in the column MAX_OCCURS shall be set to NULL.
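As a sketch (the CLASSNAME column of OBJECTCLASS is an assumption), the registered aggregation relationships can be listed together with the names of the participating classes:

-- Resolve parent and child class IDs of AGGREGATION_INFO to class names
SELECT p.classname AS parent_class, c.classname AS child_class,
       a.is_composite, a.join_table_or_column_name, a.min_occurs, a.max_occurs
FROM citydb.aggregation_info a
JOIN citydb.objectclass p ON p.id = a.parent_id
JOIN citydb.objectclass c ON c.id = a.child_id;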

The figures for the entity-relationship diagrams are taken from Oracle JDeveloper (v12.2.1), which is used to model the database schema and extract SQL DDL scripts automatically for Oracle databases. It is a freeware IDE by Oracle and can be downloaded at https://www.oracle.com/application-development/technologies/jdeveloper.html.

For PostgreSQL databases, the open source tool pgModeler (v0.8.2) is used to maintain the schema. The source code is available on GitHub at https://github.com/pgmodeler/pgmodeler. Pre-built binaries can be purchased from the official website at https://www.pgmodeler.io/.

Starting from version 3.0.0 of the 3DCityDB, the corresponding schema modelling projects are shipped with the release and can be edited by the user to create customized SQL scripts. However, note that the 3DCityDB Importer/Exporter tool only supports the default schema, unless it is reprogrammed against the user's new database schema.

Definition of the CRS for a 3D City Database instance

The definition of the CRS of a 3D City Database instance consists of two components: 1) a valid Spatial Reference Identifier (SRID, typically the EPSG code) and 2) an OGC GML conformant definition identifier for the CRS. Both components are defined during the database setup (see Section 1.3) and are further stored in the table DATABASE_SRS (see Fig. 3.55).

The SRID is an integer value key pointing to spatial reference information within the SPATIAL_REF_SYS table (PostGIS) or the MDSYS.CS_SRS table (Oracle). Both DBMSs are shipped with a large number of predefined spatial reference systems.

Note

When defining the default SRID during setup of the 3D City Database instance, the chosen value must already exist in the mentioned tables.

The GML conformant CRS definition identifier should follow the OGC recommendation for the Universal Resource Name (URN) encoding of CRSs given in the OGC Best Practice Paper Definition identifier URNs in OGC namespace [Whit2009]. At setup time, please make sure to provide a URN value which corresponds to the spatial reference system identified by the default SRID of the database instance. Since CityGML is a 3D standard, the URN encoding should always represent a three-dimensional CRS.

Note

The CRS definition identifier is used as value for the gml:srsName attribute on GML geometry elements when exporting data in CityGML format. Software consuming the exported data will rely on this information to be able to automatically apply the correct spatial reference. So please make sure that the CRS identifier is correct. The identifier is, however, neither required nor evaluated when executing spatial operations inside the 3DCityDB itself.

An identifier for a three-dimensional CRS can, for example, be denoted as compound coordinate reference systems according to [Whit2009]. The general syntax of a URN encoding for a compound reference system is as follows:

urn:ogc:def:crs,crs:authority:version:code,crs:authority:version:code

Authority, version, and code depend on the information authority providing the CRS definition (e.g. EPSG or OGC). The following example shows a possible combination of an SRID (here referring to a 2D CRS) and CRS URN encoding (3D) to set up an instance of the 3D City Database:

SRID: 31466
URN: urn:ogc:def:crs,crs:EPSG:7.7:31466,crs:EPSG:7.7:5783

The example SRID is referencing a Projected CRS defined by EPSG (DHDN / 3-degree Gauss-Krüger zone 2; used in the western part of Germany; EPSG-Code: 31466). The URN encodes a compound coordinate reference system which adds a Vertical CRS as height reference (DHHN92 height, EPSG-Code: 5783).

Note

The 3DCityDB is shipped with a database script that allows you to change the SRID and/or the CRS definition identifier at any time for a given 3DCityDB instance (see Section 3.5.5). This functionality is helpful, for instance, in case the 3DCityDB was set up with a wrong SRID by mistake that does not match the imported data. You can quickly change the SRID so that spatial functions of the database work correctly. Or you can even use this functionality to reproject an entire 3DCityDB instance to a new CRS.

The Importer/Exporter also offers a convenient way to execute this script via its graphical user interface (see Section 4.3.4).

Working with multiple database schemas

Most users rarely work with only one 3D City Database. They maintain multiple instances for each data set, for different city projects or user groups and probably for various test demos. The ability to manage CityGML ADEs sets the ground for even more experiments. This chapter explains how to manage multiple 3D City Databases in separate schemas.

Create and address database schemas

Databases and schemas in PostgreSQL

PostgreSQL provides a clustering concept for database schemas that allows users to group multiple instances of the 3D City Database. This means that within one database a user can create further schemas in addition to the default 'citydb' schema, all of which contain the table layout of the 3D City Database. They can be regarded as separate namespaces. To address the different namespaces, dot notation should be used in queries. Note that if tables are not schema-qualified, the first namespace in the database search path (see Section 1.3.3) that contains the tables will be used. One advantage of using multiple schemas instead of many databases is the ability to join tables from different namespaces. Cross-database queries are not directly possible in PostgreSQL (see the postgres_fdw extension).
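For example (the schema name my_schema is hypothetical):

-- Address a table in a specific 3DCityDB schema using dot notation
SELECT id, building_root_id FROM my_schema.building;

-- Join tables from two schemas of the same database, e.g. to compare city objects by gml:id
SELECT a.gmlid
FROM my_schema.cityobject a
JOIN citydb.cityobject b ON b.gmlid = a.gmlid;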

To create an additional 3D City Database instance within a given database run the CREATE_SCHEMA shell script and define a name for the new schema. The new instance will obtain the CRS from the ‘citydb’ schema, which can be changed later (see chapter Section 3.5.5). To drop a schema, call the DROP_SCHEMA shell script.

Oracle user schemas

In Oracle, schemas are bound to one user. All user schemas belong to one database. There is no clustering concept like in PostgreSQL, so a CREATE_SCHEMA script would not make too much sense. In fact, a new instance should be created with a new user and the CREATE_DB script. Like with PostgreSQL schemas, it is possible to join tables from different user namespaces if sufficient privileges were granted (see next section).

As another alternative, Oracle databases can be set under version control with the Oracle Workspace Manager so that a user can also work with multiple versions of a city model in separate workspaces. To change the workspace a user must execute the DBMS_WM.GotoWorkspace procedure. Versioning can either be enabled during setup of the 3D City Database with the CREATE_DB shell script, or at any time later by invoking the SQL scripts ENABLE_VERSIONING.sql and DISABLE_VERSIONING.sql using a database client such as SQL*Plus.
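A minimal example for switching the workspace from SQL*Plus (the workspace name is hypothetical):

EXECUTE DBMS_WM.GotoWorkspace('MY_WORKSPACE');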

Read and write access to a schema

A shell script called GRANT_ACCESS is provided to grant either READ-ONLY (RO) or READ-WRITE (RW) access rights to a 3D City Database instance. The user who acts as the grantor must be specified in the CONNECTION_DETAILS file. The user name of the grantee must be entered when executing the script.

Read-only access rights

Granting only read access is useful if you want to protect your data from unauthorized or accidental modification. This is the default setting in the GRANT_ACCESS script. Read-only users will be allowed to:

  • connect to the given database schema and use its objects (tables, views, sequences, types etc.),
  • export data in CityGML, CityJSON and KML/COLLADA formats,
  • generate database reports, query the index status and calculate envelopes.

But they can neither import new data into the 3DCityDB nor alter the data already stored in the tables in any way (incl. updating envelopes, dropping and creating indexes).

Read and write access rights

By choosing the RW option in the GRANT_ACCESS script the grantee will also be able to perform UPDATE and DELETE operations against the schema content. This is especially useful for Oracle users, who want to manage different database schemas with primarily one user. In PostgreSQL however, one user can be the owner of multiple schemas. Still, write access can be interesting in a multi-editor scenario.

Note

Dropping and creating indexes is not possible in PostgreSQL if you’re not the owner of the table.

Revoke access

Access rights can also be revoked, of course. Simply call the REVOKE_ACCESS script and enter the user name of the grantee and the schema name from which the rights shall be revoked.

Schema support in stored procedures

Since v3.0.0, most stored procedures of the 3D City Database offer an input argument to specify the schema name against which the operation will be executed. The default for PostgreSQL is 'citydb'; for Oracle, it is the schema of the currently connected user. Since v4.0.0, this parameter has been removed for those types of stored procedures that operate on the logical level of the database, because managing different ADEs in separate schemas can result in a different table structure. For example, one central delete script is not guaranteed to work against every schema. Thus, for PostgreSQL these procedures are now part of an instance schema such as 'citydb' (see also Section 3.5). Instead of calling a delete function from the central 'citydb_pkg' schema like this:

SELECT citydb_pkg.delete_cityobject(1, 'my_schema');

you now have to schema-qualify the function itself:

SELECT my_schema.delete_cityobject(1);

In Oracle, every stored procedure could be called this way, as every user schema stores the PL/SQL packages.

Stored procedures and additional features

The 3D City Database is shipped with a set of stored procedures referred to as the CITYDB package (formerly known as the GEODB package in v2.x). They are automatically installed during the setup procedure of the 3D City Database. The Oracle version comprises eight PL/SQL packages. In the PostgreSQL version, functions are written in PL/pgSQL and stored either in their own database schema called 'citydb_pkg' or as part of an instance schema like 'citydb'. Many of these functions and procedures expose certain tasks on the database side to the Importer/Exporter tool. When calling stored procedures, the package name has to be included for the Oracle version. With PostgreSQL, the 'citydb_pkg' schema does not have to be specified as a prefix since it is put on the database search path during setup.

_images/citydb_graphical_database_clients.png

Graphical database clients connected to the 3D City Database (left: SQL Developer (Oracle); right: pgAdmin 4 (PostgreSQL))

User-defined data types

The Oracle version defines a set of user-defined data types that are used by functions from the PL/SQL packages. They are not necessary in PostgreSQL, because of how it deals with arrays and returns of multiple variables.

  • STRARRAY, a nested table of the data type VARCHAR2
  • ID_ARRAY, a nested table of the data type NUMBER
  • DB_VERSION_OBJ, an object that bundles version information of the installed 3D City Database instance
  • DB_VERSION_TABLE, a nested table of DB_VERSION_OBJ
  • DB_INFO_OBJ, an object that bundles metadata of the used reference system
  • DB_INFO_TABLE, a nested table of DB_INFO_OBJ

The definition of the data types can be found in the SQL file for the CITYDB_UTIL package.

CITYDB_UTIL

The CITYDB_UTIL package can be seen as a container for various single utility functions. If future releases bring more stored procedures with similar functionality, some of them will probably be moved into their own package (like CITYDB_CONSTRAINT in v4.0). Nearly all functions take the schema name as the last input argument ("schema-aware"). Therefore, they can be executed against another user schema in Oracle or database schema in PostgreSQL. Note that for the function get_seq_values the schema name must be part of the first argument – the sequence name, e.g. my_schema.cityobject_seq.
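For example, in PostgreSQL the next 100 ID values of the CITYOBJECT sequence of a specific schema could be fetched like this (a sketch; the schema name my_schema is hypothetical):

SELECT * FROM citydb_pkg.get_seq_values('my_schema.cityobject_seq', 100);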

Here is an overview of the API of the CITYDB_UTIL package in Oracle:

API of the CITYDB_UTIL package for Oracle

Function | Return Type | Explanation
citydb_version () | DB_VERSION_TABLE | Returns version information of the currently installed 3DCityDB
construct_solid (geom_root_id) | SDO_GEOMETRY | Tries to construct a solid geometry based on a given root_id value in the SURFACE_GEOMETRY table
db_info (schema_name) | 3 OUT variables | Returns three columns: schema_srid INTEGER, schema_gml_srs_name VARCHAR2, versioning VARCHAR2
db_metadata (schema_name) | DB_INFO_TABLE | Returns a set of 3DCityDB metadata
drop_tmp_tables (schema_name) | void | Drops existing temporary tables
get_id_array_size (ID_ARRAY) | NUMBER | Returns the size of an ID_ARRAY nested table
get_seq_values (seq_name, seq_count) | ID_ARRAY | Returns the next k values of a given sequence
min (NUMBER, NUMBER) | NUMBER | Returns the smaller of two given numbers
sdo2geojson3d (SDO_GEOMETRY, decimal_places, compress_tags, relative2mbr) | CLOB | Converts a given geometry into a 3D GeoJSON character object
split (VARCHAR2, delimiter) | STRARRAY | Splits a string based on a given delimiter into a STRARRAY object
ST_Affine (SDO_GEOMETRY, row1col1, row1col2, row1col3, row2col1, row2col2, row2col3, row3col1, row3col2, row3col3, row1col4, row2col4, row3col4) | SDO_GEOMETRY | Performs an affine transformation on a given geometry using a given 3x3 matrix plus 3 offset values
string2id_array (VARCHAR2, delimiter) | ID_ARRAY | Transforms a string into an ID_ARRAY using a given delimiter
to_2d (SDO_GEOMETRY, srid) | SDO_GEOMETRY | Returns a geometry without Z values
versioning_db (schema_name) | VARCHAR2 | Returns either 'ON' or 'OFF'
versioning_table (table_name, schema_name) | VARCHAR2 | Returns either 'ON' or 'OFF'

The PostgreSQL API includes fewer functions, as some functionality is provided by the PostGIS extension, such as ST_AsGeoJSON, ST_Affine and ST_Force2D. Returning multiple variables is always performed with OUT variables.

API of the CITYDB_UTIL package for PostgreSQL

Function | Return Type | Explanation
citydb_version () | 4 OUT variables | Returns version information of the currently installed 3DCityDB
db_info (schema_name) | 3 OUT variables | Returns three columns: schema_srid INTEGER, schema_gml_srs_name TEXT, versioning TEXT
db_metadata (schema_name) | 6 OUT variables | Returns six variables: schema_srid INTEGER, schema_gml_srs_name TEXT, coord_ref_sys_name TEXT, coord_ref_sys_kind TEXT, wktext TEXT, versioning TEXT
drop_tmp_tables (schema_name) | void | Drops existing temporary tables
get_seq_values (seq_name, seq_count) | SETOF INTEGER | Returns the next k values of a given sequence
min (NUMERIC, NUMERIC) | NUMERIC | Returns the smaller of two given numbers
versioning_db (schema_name) | TEXT | Returns 'OFF'
versioning_table (table_name, schema_name) | TEXT | Returns 'OFF'
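For instance, the version and metadata functions can be called directly from a SQL client (PostgreSQL; the 'citydb_pkg' prefix may be omitted if the schema is on the search path):

SELECT * FROM citydb_pkg.citydb_version();
SELECT * FROM citydb_pkg.db_metadata('citydb');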

CITYDB_CONSTRAINT

The CITYDB_CONSTRAINT package includes stored procedures to define constraints or change their behavior. A user can temporarily disable certain foreign key relationships between tables, e.g. the numerous references to the SURFACE_GEOMETRY table. The constraints are not dropped. While this comes at the risk of data inconsistency, it can improve the performance of bulk write operations such as huge imports or the deletion of thousands of city objects.

It is also possible to change the delete rule of foreign keys from ON DELETE NO ACTION (use 'a' as input) to ON DELETE SET NULL ('n') or ON DELETE CASCADE ('c'). Switching the delete rule will remove and recreate the foreign key constraint. The delete rule does affect the layout of automatically generated delete scripts, as no explicit code is necessary in case of cascading deletes. However, we do not recommend changing the behavior of existing foreign key relationships, because some delete operations might not work properly anymore. For Oracle databases, there is an additional procedure to define spatial metadata for a single geometry column. All functions are schema-aware and their return type is void.

API of the CITYDB_CONSTRAINT package for Oracle

Function | Explanation
set_column_sdo_metadata (geom_column_name, dimension, srid, table_name, schema_name) | Inserts a new entry in the USER_SDO_GEOM_METADATA view for a given geometry column
set_enabled_fkey (fkey_name, table_name, BOOLEAN, schema_name) | Disables / enables a given foreign key constraint
set_enabled_geom_fkeys (BOOLEAN, schema_name) | Disables / enables all foreign key constraints that reference the SURFACE_GEOMETRY table
set_enabled_schema_fkeys (BOOLEAN, schema_name) | Disables / enables all foreign key constraints within a given user schema
set_fkey_delete_rule (fkey_name, table_name, column_name, ref_table, ref_column, on_delete_param, schema_name) | Changes the delete rule of a given foreign key constraint
set_schema_fkey_delete_rule (on_delete_param, schema_name) | Changes the delete rule of all foreign key constraints within a given user schema
set_schema_sdo_metadata (schema_name) | Inserts new entries in the USER_SDO_GEOM_METADATA view for all geometry columns of a given schema (with some exceptions)

There is only one significant difference in the API in PostgreSQL. Instead of specifying the name, table and schema of a foreign key, the OID of the corresponding integrity trigger is enough. This is because there is no ALTER TABLE command in PostgreSQL to disable foreign keys.

Notable difference in the API of the CITYDB_CONSTRAINT package for PostgreSQL

Function | Explanation
set_enabled_fkey (fkey_trigger_oid, BOOLEAN) | Disables / enables a foreign key constraint trigger
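A hedged sketch of a typical bulk-load workflow in Oracle SQL*Plus, assuming the package is addressed as citydb_constraint and using a hypothetical schema name:

-- Disable all foreign keys referencing SURFACE_GEOMETRY before a huge import ...
EXECUTE citydb_constraint.set_enabled_geom_fkeys(FALSE, 'MY_SCHEMA');

-- ... and re-enable them afterwards
EXECUTE citydb_constraint.set_enabled_geom_fkeys(TRUE, 'MY_SCHEMA');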

CITYDB_IDX

The package CITYDB_IDX provides functions to create, drop, and check both spatial and non-spatial indexes on tables of the 3D City Database by using a user-defined data type called INDEX_OBJ. In the Oracle version, the data type offers three member functions to construct an INDEX_OBJ. In the PostgreSQL version, these are just separate functions within the ‘citydb_pkg’ schema:

  • construct_spatial_3d for a 3-dimensional spatial index
  • construct_spatial_2d for a 2-dimensional spatial index
  • construct_normal for a normal B-tree index

The easiest way to make use of this package is via the Importer/Exporter (see Section 4.3.3), which provides an interface for enabling and disabling indexes (ON and OFF). Disabling spatial indexes can accelerate some operations such as bulk imports, deletion of many objects, and migration of data from a 3D City Database v2.1.0 instance to version 4. The methods used by the Importer/Exporter iterate over the entries in the INDEX_TABLE table, which is part of the database schema. In order to include more indexes, the user needs to insert their metadata into INDEX_TABLE. The differences between Oracle and PostgreSQL only concern the data types: instead of STRARRAY, an array of TEXT is used as return type.

API of the CITYDB_IDX package for Oracle

Function | Return Type | Explanation
create_index (INDEX_OBJ, is_versioned, schema_name) | VARCHAR2 | Creates a new index based on the metadata of the input INDEX_OBJ. Returns a text status.
create_normal_indexes (schema_name) | STRARRAY | Creates indexes for all normal indexes to be found in INDEX_TABLE. Returns an array of status reports.
create_spatial_indexes (schema_name) | STRARRAY | Creates indexes for all spatial indexes to be found in INDEX_TABLE. Returns an array of status reports.
drop_index (INDEX_OBJ, is_versioned, schema_name) | VARCHAR2 | Drops an index that matches the metadata of the input INDEX_OBJ. Returns a text status.
drop_normal_indexes (schema_name) | STRARRAY | Drops indexes that match all normal indexes to be found in INDEX_TABLE. Returns an array of status reports.
drop_spatial_indexes (schema_name) | STRARRAY | Drops indexes that match all spatial indexes to be found in INDEX_TABLE. Returns an array of status reports.
get_index (table_name, column_name, schema_name) | INDEX_OBJ | Returns an INDEX_OBJ from INDEX_TABLE based on the inputs
index_status (INDEX_OBJ, schema_name) | VARCHAR2 | Returns a text status for an index that matches the metadata of the input INDEX_OBJ
index_status (table_name, column_name, schema_name) | VARCHAR2 | Returns a text status for an index that matches the input arguments
status_normal_indexes (schema_name) | STRARRAY | Returns an array of status reports for all normal indexes to be found in INDEX_TABLE
status_spatial_indexes (schema_name) | STRARRAY | Returns an array of status reports for all spatial indexes to be found in INDEX_TABLE
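For example, spatial indexes could be dropped before a bulk import and recreated afterwards (Oracle; a sketch assuming the package is addressed as citydb_idx and using a hypothetical schema name):

-- Drop all spatial indexes registered in INDEX_TABLE ...
SELECT * FROM TABLE(citydb_idx.drop_spatial_indexes('MY_SCHEMA'));

-- ... run the bulk import, then recreate them
SELECT * FROM TABLE(citydb_idx.create_spatial_indexes('MY_SCHEMA'));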

CITYDB_SRS

The package CITYDB_SRS provides functions and procedures dealing with the coordinate reference system used for a 3D City Database instance. The most essential procedure is change_schema_srid, which changes the reference system for all spatial columns within a database schema. If a coordinate transformation is needed because an alternative reference system shall be used, the value 1 should be passed to the procedure as the third parameter. If a wrong SRID had been chosen by mistake during setup, a coordinate transformation might not be necessary in case the coordinate values of the city objects already match the new reference system. In this case, the value 0 should be provided to the procedure, which then only changes the spatial metadata to reflect the new reference system. The parameter can also be omitted, as 0 is its default value. Either way, changing the CRS will drop and recreate the spatial index for the affected column. Therefore, this operation can take a lot of time depending on the size of the table. Note that in Oracle the reference system cannot be changed for another user schema, so there is no schema_name parameter. There is also an additional function called get_dim(column_name, table_name, schema_name) to fetch the dimension of a spatial column, which is either 2 or 3.

API of the CITYDB_SRS package for PostgreSQL

Function | Return Type | Explanation
change_column_srid (table_name, column_name, dimension, srid, do_transform, geometry_type, schema_name) | void | Changes the reference system for a given geometry column. Spatial metadata is needed to recreate the spatial index.
change_schema_srid (srid, gml_srs_name, do_transform, schema_name) | void | Changes the reference system for all spatial columns inside a database schema. The second parameter needs to be a GML-compliant URN of the CRS.
check_srid (srid) | TEXT | Returns the message 'SRID ok' if the CRS with the given EPSG code exists in the database, and 'SRID not ok' if not.
is_coord_ref_sys_3d (srid) | INTEGER | Tests if the CRS with the given EPSG code is a 3D CRS. Returns 1 if yes and 0 if not.
is_db_coord_ref_sys_3d (schema_name) | INTEGER | Tests if the current CRS of a given schema is a 3D one. Returns 1 if yes and 0 if not.
transform_or_null (GEOMETRY, srid) | GEOMETRY | Applies a coordinate transformation to the input geometry with the given CRS. Returns NULL if the input geometry is not set.
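A minimal usage sketch for PostgreSQL, reusing the SRID and URN from the CRS example given earlier in this chapter (the 'citydb_pkg' prefix may be omitted if it is on the search path):

-- Verify that the target SRID is known to the database
SELECT citydb_pkg.check_srid(31466);

-- Change the CRS of the 'citydb' schema and transform all coordinates (do_transform = 1)
SELECT citydb_pkg.change_schema_srid(
  31466, 'urn:ogc:def:crs,crs:EPSG:7.7:31466,crs:EPSG:7.7:5783', 1, 'citydb');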

CITYDB_STAT

The package CITYDB_STAT currently only serves a single purpose: to count all entries in all tables and generate a report as an array of string values (STRARRAY data type in Oracle, text[] in PostgreSQL). The tabulator escape sequence \t is used to generate a nicely formatted report for the Importer/Exporter.

API of CITYDB_STAT package for Oracle
Function
Return Type
Explanation
table_content (table_name,
schema_name)
NUMBER
Returns the count result obtained from a query
against the given table
table_contents (schema_name)
STRARRAY
Returns a text array with row count results
for most tables in 3D City Database (excluding
metadata tables and system tables)
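
As an illustration, the report can be queried directly from SQL*Plus. This is only a sketch; the schema name CITYDB is an example value.

-- Minimal sketch (Oracle): print the row-count report for the schema CITYDB
SELECT * FROM TABLE(citydb_stat.table_contents('CITYDB'));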

CITYDB_OBJCLASS

The CITYDB_OBJCLASS package only provides two convenience functions to cast between table names and ID values of the OBJECTCLASS table. In contrast to the previously introduced packages, these functions cannot be applied against different database schemas, as this would require dynamic SQL. While this would not be a problem when converting single values, the performance with dynamic SQL could be a lot worse when these functions are integrated in a full table scan. Therefore, for PostgreSQL they are now part of the ‘citydb’ schema as pure SQL functions. In Oracle, they make up another PL/SQL package.

API of CITYDB_OBJCLASS package for Oracle
Function
Return Type
Explanation
objectclass_id_to_table_name
(objectclass_id)
VARCHAR2
Returns the corresponding table name to a given
object class ID
table_name_to_objectclass_ids
(table_name)
ID_ARRAY
Returns an array of object class IDs that are
managed in the given table
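
For PostgreSQL, where these functions live in the ‘citydb’ schema, a lookup can be done with plain SQL. This is only a sketch; the object class ID 26 (Building in the default OBJECTCLASS table) and the table name 'building' are example values.

-- Minimal sketch (PostgreSQL): resolve the table name for object class ID 26
SELECT citydb.objectclass_id_to_table_name(26);

-- list the object class IDs managed in the BUILDING table
SELECT citydb.table_name_to_objectclass_ids('building');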

CITYDB_DELETE

The package CITYDB_DELETE consists of several functions that facilitate deleting single or multiple city objects. Each function automatically takes care of integrity constraints between relations in the database. The package is meant as a low-level API providing a delete function for each relation (except for linking tables) – from a single polygon in the table SURFACE_GEOMETRY (del_surface_geometry) up to a complete CityObject (del_cityobject) or even a whole CityObjectGroup (del_cityobjectgroup). This should help users to develop more complex delete operations on top of these low-level functions without re-implementing their functionality.

Most of the stored procedures take the primary key ID value of the entry to be deleted as input parameter and return the ID value if the entry has been successfully removed. So, if NULL is returned, the entry is either already gone or the deletion did not work due to an error. Nearly every delete function comes with a pendant to delete multiple entries at once. These alternative functions take an array of ID values as input and return an array of successfully deleted entries. For PostgreSQL, the array is unrolled inside the functions as PL/pgSQL can return a SET OF INTEGER values.

In order to illustrate the low-level approach of this package, assume a user wants to delete a building feature together with all its nested sub features. For this purpose, the user calls the del_building (or del_cityobject) function, which internally leads to subsequent calls to the following stored procedures:

  • del_building for the building and its dependent building parts (recursive call)
  • del_thematic_surface for dependent boundary surfaces of the building (nested call of del_opening for dependent openings of the boundary surfaces)
  • del_building_installation for dependent outer installations of the building (nested call of del_thematic_surface for boundary surfaces of the installations)
  • del_room for dependent rooms of the building (nested call of del_thematic_surface for interior boundary surfaces, del_building_installation for interior installation and del_building_furniture for furniture within the room)
  • del_address for dependent addresses that are not referenced by other buildings and bridges
  • del_implicit_geometry for each prototype geometry of a nested feature, e.g. Openings, BuildingInstallation
  • del_surface_geometry for deleting the geometry of the building and its nested features
  • del_cityobject to remove the entry in the CITYOBJECT table that corresponds to the deleted building and the deleted child features (also deletes generic attributes, external references, appearances, etc.)

Note that global Appearances with no direct reference to a CityObject are not deleted during such a deletion process. Therefore, the method cleanup_appearances should be executed afterwards to remove all Appearance information (incl. entries in tables APPEAR_TO_SURFACE_DATA, SURFACE_DATA and TEX_IMAGE). Like the stored procedures from the CITYDB_OBJCLASS package, the delete functions are part of the ‘citydb’ schema and not of ‘citydb_pkg’. This is not only because of better performance without dynamic SQL; it is also mandatory because the code for the delete functions is generated automatically based on the foreign keys.

The del_ prefix is used to not exceed 30 characters in Oracle. As explained in Section 3.4, managing different CityGML ADEs in different schemas would require different delete scripts for each schema. A simple code block to delete objects based on a query result can look like this:

Oracle:

-- single version
DECLARE
  deleted_id NUMBER;
  dummy_ids ID_ARRAY := ID_ARRAY();
BEGIN
  FOR rec IN (SELECT * FROM cityobject WHERE ...) LOOP
    deleted_id := citydb_delete.del_cityobject(rec.id);
  END LOOP;
  dummy_ids := citydb_delete.cleanup_appearances;
END;
-- array version
DECLARE
  pids ID_ARRAY := ID_ARRAY();
  deleted_ids ID_ARRAY := ID_ARRAY();
  dummy_ids ID_ARRAY := ID_ARRAY();
BEGIN
  SELECT id BULK COLLECT INTO pids
    FROM cityobject WHERE ...;

  deleted_ids := citydb_delete.del_cityobject(pids);
  dummy_ids := citydb_delete.cleanup_appearances;
END;

PostgreSQL:

-- single version
SELECT citydb.del_cityobject(id) FROM cityobject WHERE ... ;
SELECT citydb.cleanup_appearances();

-- array version
SELECT citydb.del_cityobject(array_agg(id))
  FROM cityobject WHERE ... ;
SELECT citydb.cleanup_appearances();

Which delete function to use depends on the ratio between the number of entries to be deleted and the total count of objects in the database. One array delete executes each necessary query only once compared to numerous single deletes and can therefore be faster. However, if the array is huge and covers a great portion of the table (say 20% of all rows), it might be faster to use the single version instead or batches of smaller arrays. Nested features are deleted with arrays anyway.

The previously available CITYDB_DELETE_BY_LINEAGE package has been merged into the CITYDB_DELETE package and reduced to only one function. It allows deleting multiple city objects that share a common value in the LINEAGE column of the CITYOBJECT table. The procedure cleanup_schema provides a convenient way to reset an entire 3DCityDB instance under both Oracle and PostgreSQL. After invoking this procedure, all entries from all tables are deleted and all sequences are reset.
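
A minimal sketch of a lineage-based delete in PostgreSQL, assuming the delete functions reside in the ‘citydb’ schema and using 'import_2021' as an example lineage value:

-- Minimal sketch (PostgreSQL): delete all city objects imported with lineage 'import_2021'
SELECT citydb.del_cityobjects_by_lineage('import_2021');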

The following tables only list functions whose signatures differ from one another; del_cityobject stands for the general layout of a delete function:

API of the CITYDB_DELETE package for Oracle
Function
Return Type
Explanation
cleanup_appearances
(only_global)
ID_ARRAY
Removes unreferenced Appearances incl.
SurfaceData and textures and returns an array of
their IDs. Pass 1 (default) to only delete global
appearances, or 0 to include local appearances
cleanup_schema
(schema_name)
void
Truncates most tables and resets sequences in a
given 3D City Database schema
cleanup_table (table_name)
ID_ARRAY
Removes entries in given table which are not
referenced by any other entities
del_cityobject (NUMBER)
NUMBER
Removes the CityObject with the given ID incl.
all references to other tables. The ID value
is returned on success
del_cityobject (ID_ARRAY)
ID_ARRAY
Removes CityObjects with the given IDs incl.
all references to other tables. An array of
IDs of successfully deleted objects is returned
del_cityobjects_by_lineage
(lineage_value)
ID_ARRAY
Removes all CityObjects on behalf of a LINEAGE
value and returns an array of their IDs
API of the CITYDB_DELETE package for PostgreSQL
Function
Return Type
Explanation
cleanup_appearances
(only_global)
SET OF INTEGER
Removes unreferenced Appearances incl.
SurfaceData and textures and returns an array of
their IDs. Pass 1 (default) to only delete global
appearances, or 0 to include local appearances
cleanup_schema
(schema_name)
void
Truncates most tables and resets sequences in a
given 3D City Database schema
cleanup_table (table_name)
SET OF INTEGER
Removes entries in given table which are not
referenced by any other entities
del_cityobject (INTEGER)
INTEGER
Removes the CityObject with the given ID incl.
all references to other tables. The ID value
is returned on success
del_cityobject (INTEGER[])
SET OF INTEGER
Removes CityObjects with the given IDs incl.
all references to other tables. An array of
IDs of successfully deleted objects is returned
del_cityobjects_by_lineage
(lineage_value)
SET OF INTEGER
Removes all CityObjects on behalf of a LINEAGE
value and returns an array of their IDs

CITYDB_ENVELOPE

The package CITYDB_ENVELOPE provides functions that allow a user to calculate the maximum 3D bounding volume of a CityObject identified by its ID. For each feature type, a corresponding function is provided starting with the env_ prefix. In PostgreSQL, they are part of an instance schema such as ‘citydb’ and not of ‘citydb_pkg’, because adding CityGML ADEs can lead to unforeseen schema changes.

The bounding volume is calculated by evaluating all geometries of the city object in all LoDs including implicit geometries. In PostGIS, they are first collected and then fed to the ST_3DExtent aggregate function which returns a BOX3D object. In Oracle the aggregate function SDO_AGGR_MBR is used which produces a 3D optimized rectangle with only two points. The box2envelope function turns this output into a diagonal cutting plane through the calculated bounding volume. This surface representation follows the definition of the ENVELOPE column of the CITYOBJECT table as discussed in Section 3.2.2 (see also Fig. 3.27). All functions in this package return such a geometry.

The CITYDB_ENVELOPE API also allows for updating the ENVELOPE column of the city objects with the calculated value (by simply setting the set_envelope argument that is available for all functions to 1). This is useful, for instance, whenever one of the geometry representations of the city object has been changed or if the ENVELOPE column could not be (correctly) filled during import and, for example, is NULL.

To calculate and update the ENVELOPE of all city objects of a given feature type, use the get_envelope_cityobjects function and provide the OBJECTCLASS_ID as parameter. If 0 is passed as OBJECTCLASS_ID, then the ENVELOPE columns of all city objects are updated. To update only those ENVELOPE columns having NULL as value, set the only_if_null parameter to 1.

API of the CITYDB_ENVELOPE package for PostgreSQL
Function
Return Type
Explanation
box2envelope (BOX3D)
GEOMETRY
Takes a BOX3D and returns a 3D polygon that
represents a diagonal cutting plane through this
box. Under Oracle the input is an optimized 3D
rectangle (SDO_INTERPRETATION = 3)
env_cityobject (cityobject_id,
set_envelope)
GEOMETRY
Returns the current envelope representation of
the given CityObject and optionally updates the
ENVELOPE column
get_envelope_cityobjects
(objectclass_id, set_envelope,
only_if_null)
GEOMETRY
Returns the current envelope representation of
all CityObjects of given object class and
optionally updates the ENVELOPE column with
the individual bounding boxes
get_envelope_implicit_geometry
(implicit_rep_id, reference_point,
transformation_matrix)
GEOMETRY
Returns the envelope of an implicit geometry
which has been transformed based on the
passed reference point and transformation
matrix
update_bounds (old_box,
new_box)
GEOMETRY
Takes two GEOMETRY objects to call
box2envelope and returns the result. If one
side is NULL, the non-empty input is
returned.
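
As an example of the API above, the following call recalculates the envelopes of all Building objects and writes them back where they are still missing. This is only a sketch; the object class ID 26 (Building) and the ‘citydb’ schema prefix are example assumptions.

-- Minimal sketch (PostgreSQL): update the ENVELOPE of all Building objects
-- (object class ID 26), but only where the column is still NULL
SELECT citydb.get_envelope_cityobjects(26, 1, 1);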

3D City Database using Docker

3D City Database on Docker

The 3DCityDB Docker images are available for PostgreSQL/PostGIS and Oracle. The PostgreSQL/PostGIS version is based on the official PostgreSQL and PostGIS Docker images. The Oracle version is based on the Oracle Database Enterprise Edition images available from the Oracle Container registry. The images described here are available for 3DCityDB version v4.1.0 and newer. Images for older 3DCityDB versions are available from TUM-GIS 3DCityDB Docker images.

When designing the images we tried to stay as close as possible to the behavior of the base images and the 3DCityDB Shell scripts. Thus, all configuration options you may be used to from the base images are available for the 3DCityDB Docker images as well.

Synopsis

Synopsis 3DCityDB Docker PostgreSQL/PostGIS
docker run --name 3dcitydb -p 5432:5432 -d \
    -e POSTGRES_PASSWORD=<theSecretPassword> \
    -e SRID=<EPSG code> \
    [-e HEIGHT_EPSG=<EPSG code>] \
    [-e GMLSRSNAME=<mySrsName>] \
    [-e POSTGRES_DB=<database name>] \
    [-e POSTGRES_USER=<username>] \
    [-e POSTGIS_SFCGAL=<true|false|yes|no>] \
  3dcitydb/3dcitydb-pg
Synopsis 3DCityDB Oracle
docker run --name 3dcitydb -p 1521:1521 -d \
    -e ORACLE_USER=<theUserName> \
    -e ORACLE_PASSWORD=<theSecretPassword> \
    -e SRID=<EPSG code> \
    [-e HEIGHT_EPSG=<EPSG code>] \
    [-e GMLSRSNAME=<mySrsName>] \
    [-e ORACLE_PDB=<pluggable database name>] \
    [-e DBVERSION=<oracle license option>] \
    [-e VERSIONING=<version-enabled>] \
  3dcitydb/3dcitydb-oracle

Image variants and versions

The images are available in various variants and versions. The PostgreSQL/PostGIS images are available based on Debian and Alpine Linux, while the Oracle images are based on Oracle Linux. Table 3.28 gives an overview of the available image versions.

3DCityDB Docker image variants and versions for PostgreSQL/PostGIS 14-3.2.
Tag PostGIS (Debian) PostGIS (Alpine) Oracle
edge psql-deb-build-edge psql-deb-size-edge psql-alp-build-edge psql-alp-size-edge ora-build-edge ora-size-edge
latest psql-deb-size-latest psql-alp-size-latest ora-size-edge
4.1.0 psql-deb-size-v4.1.0 psql-alp-size-v4.1.0 ora-size-edge
4.2.0 psql-deb-size-v4.2.0 psql-alp-size-v4.2.0 ora-size-edge

The edge images are automatically built and published on every push to the master branch of the 3DCityDB Github repository using the latest stable version of the base images. The latest and release image versions are only built when a new release is published on Github. The latest tag will point to the most recent release version using the latest base image version.

PostgreSQL/PostGIS images

The PostgreSQL/PostGIS images are available from 3DCityDB DockerHub and can be pulled like this:

docker pull 3dcitydb/3dcitydb-pg:TAG

The image tags are composed of the base image version, the 3DCityDB version and the image variant: <base image version>-<3DCityDB version>-<image variant>. The base image version is inherited from the PostGIS Docker images. Debian is the default image variant, for which no variant suffix is appended to the tag. For the Alpine Linux images, -alpine is appended. Currently supported base image versions are listed in Table 3.29.

Overview on supported PostgreSQL/PostGIS versions.
PostgreSQL/PostGIS version 2.5 3.0 3.1 3.2
9.5 9.5-2.5 9.5-3.0    
9.6 9.6-2.5 9.6-3.0 9.6-3.1 9.6-3.2
10 10-2.5 10-3.0 10-3.1 10-3.2
11 11-2.5 11-3.0 11-3.1 11-3.2
12 12-2.5 12-3.0 12-3.1 12-3.2
13   13-3.0 13-3.1 13-3.2
14     14-3.1 14-3.2

The full list of available tags can be found on DockerHub. Here are some examples of full image tags:

docker pull 3dcitydb/3dcitydb-pg:9.5-2.5-4.2.0
docker pull 3dcitydb/3dcitydb-pg:13-3.2-4.2.0
docker pull 3dcitydb/3dcitydb-pg:13-3.2-4.2.0-alpine
Oracle images

Due to Oracle licensing conditions we cannot offer Oracle images in a public repository like DockerHub at the moment. However, you can easily build the images yourself. A detailed description of how to do that is available in Section 3.6.3.2.

Usage and configuration

A 3DCityDB container is configured by setting environment variables inside the container. For instance, this can be done using the -e VARIABLE=VALUE flag of docker run. The 3DCityDB Docker images introduce the variables SRID, HEIGHT_EPSG and GMLSRSNAME. Their behavior is described here. Furthermore, some variables inherited from the base images offer important configuration options; these are described separately for the PostgreSQL/PostGIS and Oracle image variants.

Tip

All variables besides POSTGRES_PASSWORD and ORACLE_PWD are optional.

SRID=<EPSG code>

EPSG code for the 3DCityDB instance. If SRID is not set, the 3DCityDB schema will not be set up in the default database and you will end up with a plain PostgreSQL/PostGIS or Oracle container.

HEIGHT_EPSG=<EPSG code>

EPSG code of the height system, omit or use 0 if unknown or SRID is already 3D. This variable is used only for the automatic generation of GMLSRSNAME.

GMLSRSNAME=<mySrsName>

If set, this value overrides the GMLSRSNAME that is otherwise generated automatically from SRID and HEIGHT_EPSG. If not set, the variable is created automatically as follows:

If only SRID is set: GMLSRSNAME = urn:ogc:def:crs:EPSG::SRID

If SRID and HEIGHT_EPSG are set: GMLSRSNAME = urn:ogc:def:crs,crs:EPSG::SRID,crs:EPSG::HEIGHT_EPSG
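
For example, starting a PostgreSQL/PostGIS container with the following values would result in the compound GMLSRSNAME urn:ogc:def:crs,crs:EPSG::25832,crs:EPSG::7837. The EPSG codes and password are example values only.

docker run --name 3dcitydb -p 5432:5432 -d \
    -e POSTGRES_PASSWORD=changeMe \
    -e SRID=25832 \
    -e HEIGHT_EPSG=7837 \
  3dcitydb/3dcitydb-pg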

PostgreSQL/PostGIS environment variables

The 3DCityDB PostgreSQL/PostGIS Docker images make use of the following environment variables inherited from the official PostgreSQL and PostGIS Docker images. Refer to the documentation of both images for many more configuration options.

POSTGRES_DB=<database name>

Sets the name of the default database. If not set, the default database is named after POSTGRES_USER.

POSTGRES_USER=<username>

Sets name for the database user, defaults to postgres.

POSTGRES_PASSWORD=<password>

Sets the password for the database connection. This variable is mandatory.

POSTGIS_SFCGAL=<true|false|yes|no>

If set, PostGIS SFCGAL support is enabled. Note: SFCGAL is currently only available in the Debian image variant. Setting the variable on Alpine images will have no effect.

Oracle environment variables
DBUSER=<username>

The database user name of the 3DCityDB instance to be created. The default value is ‘citydb’.

ORACLE_PWD=<password>

The database password of the 3DCityDB instance to be created. This variable is mandatory.

ORACLE_PDB=<pluggable database name>

Sets the name of the pluggable database (PDB) that should be used (default: ‘ORCLPDB1’). Requires Oracle 12c or higher.

DBVERSION=<oracle license option>

‘S’ (default value) or ‘L’ to choose the Oracle Spatial or Locator license option for the 3DCityDB instance to be created.

VERSIONING=<version-enabled>

‘yes’ or ‘no’ (default value) to specify whether the 3DCityDB instance should be version-enabled based on Oracle’s Workspace Manager.

How to build images

This section describes how to build 3DCityDB Docker images on your own. Both the PostgreSQL/PostGIS and the Oracle version offer one build argument that can be used to set the tag of the base image.

BASEIMAGE_TAG=<tag of the base image>

Tag of the base image that is used for the build. Available tags can be found on DockerHub for the PostgreSQL/PostGIS images and in the Oracle container registry.

PostgreSQL/PostGIS

The PostgreSQL/PostGIS images are built by cloning the 3DCityDB Github repository and running docker build:

  1. Clone 3DCityDB Github repository and navigate to the postgresql folder in the repo:

    git clone https://github.com/3dcitydb/3dcitydb.git
    cd 3dcitydb/postgresql/
    

2. Build the PostgreSQL/PostGIS image using docker build:

docker build -t 3dcitydb/3dcitydb-pg .

# or with a specific base image tag
docker build -t 3dcitydb/3dcitydb-pg \
    --build-arg BASEIMAGE_TAG=14-3.2 \
  .
Oracle

To build Oracle 3DCityDB Docker images, you need to create an Oracle account and accept the licensing conditions first:

  1. Visit https://login.oracle.com/mysso/signon.jsp and create an account.

  2. Visit https://container-registry.oracle.com and navigate to Database. Click the Continue button in the right column of the enterprise repository. Scroll to the bottom of the license agreement, which should be displayed now and click accept.

  3. The repository listing should now show a green hook for the enterprise repository, as shown in the example below. oracle-license

    If this is the case, you are ready to pull the required base images from Oracle container registry.

  4. Sign in to the Oracle container registry with the account credentials from above using docker login:

    docker login container-registry.oracle.com
    
  5. Clone the 3DCityDB repository and navigate to the oracle folder in the repo:

git clone https://github.com/3dcitydb/3dcitydb.git
cd 3dcitydb/oracle/
  6. Build the 3DCityDB Oracle image using docker build:

    docker build -t 3dcitydb/3dcitydb-oracle .
    
    # or with a specific base image tag
    docker build . \
      -t 3dcitydb/3dcitydb-oracle \
      --build-arg BASEIMAGE_TAG=19.3.0.0
    

After the build process has finished, you are ready to use the image (see Section 3.6.2 and Section 3.6.2.2) or push it to a private Docker repository.

Performance tuning for PostgreSQL/PostGIS containers

PostgreSQL databases offer a wide range of configuration parameters that affect database performance and enable, for example, parallelization of queries. Database optimization is a complex topic, but using PGTune you can easily get a set of configuration options that may help to increase database performance.

  1. Visit the PGTune website, fill in the form and generate a set of parameters for your system. You will get something like this:

    # DB Version: 13
    # OS Type: linux
    # DB Type: mixed
    # Total Memory (RAM): 8 GB
    # CPUs num: 8
    # Connections num: 20
    # Data Storage: ssd
    
    max_connections = 20
    shared_buffers = 2GB
    effective_cache_size = 6GB
    maintenance_work_mem = 512MB
    checkpoint_completion_target = 0.9
    wal_buffers = 16MB
    default_statistics_target = 100
    random_page_cost = 1.1
    effective_io_concurrency = 200
    work_mem = 13107kB
    min_wal_size = 1GB
    max_wal_size = 4GB
    max_worker_processes = 8
    max_parallel_workers_per_gather = 4
    max_parallel_workers = 8
    max_parallel_maintenance_workers = 4
    
  2. Pass these configuration parameters to the postgres command using the -c option when starting your 3DCityDB container with docker run.

    docker run -d -i -t --name citydb -p 5432:5432 \
      -e SRID=25832 \
      -e POSTGRES_PASSWORD=changeMe \
    3dcitydb/3dcitydb-pg postgres \
      -c max_connections=20 \
      -c shared_buffers=2GB \
      -c effective_cache_size=6GB \
      -c maintenance_work_mem=512MB \
      -c checkpoint_completion_target=0.9 \
      -c wal_buffers=16MB \
      -c default_statistics_target=100 \
      -c random_page_cost=1.1 \
      -c effective_io_concurrency=200 \
      -c work_mem=13107kB \
      -c min_wal_size=1GB \
      -c max_wal_size=4GB \
      -c max_worker_processes=8 \
      -c max_parallel_workers_per_gather=4 \
      -c max_parallel_workers=8 \
      -c max_parallel_maintenance_workers=4
    

Creating 3DCityDB Docker images including data

In general, it is not recommended to store data directly inside a Docker image; use Docker volumes instead. Volumes are the preferred mechanism for persisting data generated by and used by Docker containers. However, for some use cases it can be very handy to create a Docker image including data, for instance, if you have automated tests operating on the exact same data every time, or if you want to prepare a 3DCityDB image including data for a lecture or workshop that runs out of the box without having to import data first.

Warning

The practice described here has many drawbacks and is a potential security threat. It should not be performed with sensitive data!

Here is how to create an image with data:

  1. Choose a 3DCityDB image that is suitable for your purpose. You will not be able to change the image version later, as you could easily do when using volumes (the default). Available versions are listed in Image variants and versions. To update an image with data, it has to be recreated from scratch using the desired/updated base image.
  2. Create a Docker network and start a 3DCityDB Docker container:
docker network create citydb-net

docker run -d --name citydbTemp \
  --network citydb-net \
  -e "PGDATA=/mydata" \
  -e "POSTGRES_PASSWORD=changeMe" \
  -e "SRID=25832" \
3dcitydb/3dcitydb-pg:latest-alpine

Warning

The database credentials and settings provided in this step cannot be changed when later on creating containers from this image!

Note down the database connection credentials (db name, username, password) or you won’t be able to access the content later.

  3. Import data into the container. For this example we are using the LoD3 Railway dataset and the 3DCityDB Importer/Exporter Docker image:
docker run -i -t --rm --name impexp \
    --network citydb-net \
    -v /d/temp:/data \
  3dcitydb/impexp:latest-alpine import \
    -H citydbTemp \
    -d postgres \
    -u postgres \
    -p changeMe \
    /data/Railway_Scene_LoD3.zip
  4. Stop the running 3DCityDB container, remove the network, and commit the container to an image:
docker stop citydbTemp
docker network rm citydb-net
docker commit citydbTemp 3dcitydb/3dcitydb-pg:4.1.0-alpine-railwayScene_LoD3
  5. Remove the 3DCityDB container:
docker rm -f -v citydbTemp

We have now created a 3DCityDB image containing data that can, for example, be pushed to a Docker registry or exported as a TAR archive. When creating containers from this image, it is not required to specify any configuration parameters as you usually would when creating a fresh 3DCityDB container.

docker run --name cdbWithData --rm -p 5432:5432 \
  3dcitydb/3dcitydb-pg:4.1.0-alpine-railwayScene_LoD3

To connect to the database, use the credentials you set in step 2. The following example counts the city objects in the database running in the container using psql.

$ export PGPASSWORD=changeMe
$ query='SELECT COUNT(*) FROM citydb.cityobject;'
$ psql -h localhost -p 5432 -U postgres -d postgres -c "$query"

count
-------
  231
(1 row)

Importer/Exporter

Note

This is the documentation of the Importer/Exporter version 5.1.

The Importer/Exporter tool is a Java-based client for the 3D City Database and allows for high-performance loading and extracting 3D city model data.

It offers a graphical user interface (GUI) for convenient use on desktop computers by end users, and it can also be run without a GUI via its command-line interface (CLI). The CLI is useful on headless systems or to embed the Importer/Exporter in batch processing workflows and third-party applications.

This chapter explains the functionality and use of the Importer/Exporter based on the GUI of the tool. Section 4.9 is then dedicated to a discussion of the CLI. For system requirements and a documentation of the installation procedure, please refer to Section 1.1.

Launching the Importer/Exporter

The 3D City Database Importer/Exporter offers both a graphical user interface (GUI) and a command-line interface (CLI). The CLI allows for embedding the tool in batch processing workflows and third-party applications. The usage of the CLI is documented in Section 4.9.

To launch the GUI, simply use the starter script located in the installation directory of the Importer/Exporter. A desktop icon as well as shortcuts in the start menu of your operating system will additionally be available in case you chose to create shortcuts during setup. Depending on your platform, one of the following starter scripts is provided in the installation directory:

  • 3DCityDB-Importer-Exporter.bat (Microsoft Windows family)
  • 3DCityDB-Importer-Exporter (UNIX/Linux/Mac OS family)

On most platforms, double-clicking the starter script or its shortcut runs the Importer/Exporter.

For some UNIX/Linux distributions, you will have to run the starter script from within a shell environment though. Please open your favourite shell and first check whether execution rights are correctly set on the starter script. If not, change to the installation folder and enter the following command to make the starter script executable for the owner of the file:

$ chmod u+x 3DCityDB-Importer-Exporter

Afterwards, simply run the software by issuing the following command:

$ ./3DCityDB-Importer-Exporter

Note

With every release, the README.txt file in the installation folder provides version-specific information on how to run the Importer/Exporter.

Using environment variables in the launch process

The Importer/Exporter is launched with default options for the Java Virtual Machine (JVM) that runs the application. You can override these default options in the launch process by using the environment variable JAVA_OPTS.

For example, JAVA_OPTS can be used to control the amount of main memory that the Importer/Exporter can use. By default, the tool runs with a minimum of 1 GB of main memory. The maximum available main memory depends on your Java version. When running on Java 11, for instance, the Importer/Exporter can use 25% of the available physical main memory of your machine with an upper limit of 25 GB. This value should be reasonable on most platforms and for most import/export processes. However, when working with very large CityGML top-level features or CityJSON input files, you might need to increase the memory limits to avoid out-of-memory exceptions. This can be done with the JVM parameters -Xms for the initial memory size and -Xmx for the maximum memory size (please check the documentation of your Java installation for more details).

The following snippet shows how to use JAVA_OPTS to set the memory limits to an initial size of 1 GB and a maximum size of 8 GB under Windows.

set JAVA_OPTS="-Xms1G -Xmx8G"

For UNIX/Linux, it looks almost the same:

export JAVA_OPTS="-Xms1G -Xmx8G"

Simply copy this line into the start script 3DCityDB-Importer-Exporter and make sure to put it before the last line in this script. Note that you can set any other JVM option as well.

Instead of JAVA_OPTS you can also use IMPEXP_OPTS as environment variable. Both are supported by the Importer/Exporter and evaluated in the launch process.

Adapting the CLI start script

The start script 3DCityDB-Importer-Exporter is only a wrapper and invokes the CLI script impexp of the Importer/Exporter, which is located in the bin folder of the installation directory. To get more fine-grained control over the launch process, you can also edit this CLI start script directly. Good knowledge in shell scripting is required when doing so.

Using the graphical user interface (GUI)

The graphical user interface of the Importer/Exporter is organized into four main components as shown in Fig. 4.1. A menu bar [1] is anchored to the top of the window (Windows, some Linux distributions) or to the top of the screen (Mac, some Linux distributions). The main application window is divided into an operations window [2] that renders the user dialogs of the separate operations of the Importer/Exporter and a console window [4] that displays log messages. Via the View entry in the menu bar, the console window can be detached from the main window and rendered in a separate window. At the bottom of the operations window, a status bar [3] provides information about running processes and database connections.

_images/impexp_gui_organization_fig.png

Organization of the Importer/Exporter GUI.

The tab menu on top of the operations window lets you switch between the main operations of the Importer/Exporter and their user dialogs. The following tabs are available:

  • Import: Import CityGML or CityJSON datasets into the database
  • Export: Export city model data in CityGML or CityJSON format
  • VIS Export: Export city model data in KML, COLLADA or glTF format for visualization
  • Database: Database connection settings and operations
  • Preferences: Preference settings for each operation

Note

If you have installed plugins, the tab menu may contain additional entries. Please refer to the documentation of your plugin in this case. The installed plugins can be managed on the Preferences tab (cf. Section 4.7.1)

The main menu bar [1] offers the entries File, View, and Help. The File menu lets a user store and load application settings from a config file and close the application.

File menu
Open Settings…
Load a config file and recover all settings from this file.
Save Settings
Save all settings made in the GUI to the default config file.
Save Settings As…
Save all settings made in the GUI to a separate config file.
Restore Default Settings
Set all settings to default values.
Save Settings XSD As…
Save the XML Schema defining the XML structure of config files to a
separate file. The XML Schema is helpful in case a user wants to
manually edit the config file. Only config files conforming to the XML Schema definition will be successfully loaded by the Importer/Exporter.
Recently Used Settings
List of recently loaded config files.
Exit
Close the Importer/Exporter application.

The Importer/Exporter uses one default config file per operating system user running the Importer/Exporter. All settings made in the GUI are automatically stored in this default config file when the Importer/Exporter is closed and are loaded from this file upon program start. The default config file is named project.xml and is stored in the home directory of the user. Precisely, you will find the config file in the subfolder 3dcitydb/importer-exporter/config. Note that the location of the home directory differs for different operating systems. Using environment variables, the location can be identified dynamically:

  • %HOMEDRIVE%%HOMEPATH%\3dcitydb\importer-exporter\config (Windows 7 and higher)
  • $HOME/3dcitydb/importer-exporter/config (UNIX/Linux, Mac OS families)

The View menu affects the GUI elements of the Importer/Exporter and offers the following entries:

View menu
Open Map Window
Open a 2D map window for bounding box selection (cf. Section 4.8).
Detach Console
Render the console window in a separate application window.
Light Mode
Dark Mode
Switch between a light and a dark interface style.
Restore default perspective
Restore the GUI to its default settings.

Finally, the Help menu provides a link to the user manual and further information about the Importer/Exporter.

Help menu
Online-Documentation…
Open this user manual in a web browser.
Read Me
Open README.txt file shipped with the Importer/Exporter.
About
Display general information like the official version number of the Importer/Exporter and development partners.

Database connections and operations

The Database tab of the operations window shown in the figure below allows a user to manage and establish database connections and to execute database operations.

_images/impexp_gui_database_tab_fig.png

Database tab.

In order to connect to an instance of the 3D City Database, valid connection parameters must be entered in the above dialog. Mandatory database connection details comprise the username and password of the database user, the type of the database, the server name (network name or IP address) and port number (default: 5432 for PostgreSQL; 1521 for Oracle) of the database server, and the database name (when using Oracle, enter the database SID or service name here). For convenience, a user can choose to save the password in the config file of the Importer/Exporter. Please be aware that the password is stored as plain text.

The optional schema parameter lets you define the database schema you want to connect to. Leave it empty to connect to the default schema. The Query button lets you retrieve a list of available schemas from the database. More information on how to work with multiple 3DCityDB schemas can be found in Section 3.4.

For Oracle databases, you can additionally choose to connect to a specific workspace in case the database is version-enabled. All operations of the Importer/Exporter will be executed against this workspace. Please provide the name of the workspace and an optional timestamp. If no workspace is specified, the LIVE workspace is used by default. Again, use the Query button to get a list of available workspaces in the database.

Hint

If you need assistance, ask your database administrator for connection details, schemas and workspaces.

To manage more than one database connection, connection details are assigned a short description text. The drop-down list at the top of the Database tab allows a user to switch between connections based on their description. By using the Apply, New, Copy and Delete buttons, edits to the parameters of the currently selected connection can be saved, a new connection with empty connections details can be created, and existing connections can be copied or deleted from the list.

The Connect / Disconnect button lets a user connect to / disconnect from a 3D City Database instance based on the provided connection details.

Note

With this version of the Importer/Exporter, you will be able to connect to version 3.0 to 4.2 instances of the 3D City Database but not to any earlier or later version. See Section 1.4 for a guide on how to migrate version 2 and 3 instances of the 3D City Database to the latest version 4.2.

Connection messages

The console window logs all messages that occur during the connection attempt. In case a connection could not be established, error messages are displayed that help to identify the cause of the connection problem. Otherwise, the console window contains information about the connected 3D City Database instance like those shown in Fig. 4.3. This information comprises the version of the 3D City Database, the name and version of the underlying database system, the connection string, the schema name, the spatial reference system ID (SRID) as well as its name and GML encoding (as specified during the setup of the 3D City Database).

_images/impexp_db_connection_log_fig.png

Log messages for a successful database connection.

This information can be requested from a connected 3D City Database at any time using the Info button on the Database tab. Upon successful connection, the description of the active connection is moreover displayed in the title bar of the application window.

Executing database operations

After having established a connection to an instance of the 3D City Database, the Database tab (cf. [2] in Fig. 4.2) offers the following database operations as separate tabs below the database connection details:

Database report

A database report is a list of all tables of the 3D City Database together with their number of rows. This operation therefore provides a quick overview of the contents of the 3D City Database. The report is printed to the console window.

_images/impexp_gui_database_report_fig.png

Generating a database report.

Note

The database report always shows the total count of rows. No filters are applied in this operation. Be aware that the report is generated for the schema and/or workspace defined in the database connection details.

The following figure shows an excerpt of a database report in the console window as example.

_images/impexp_gui_database_report_console_fig.png

Excerpt of a database report printed to the console.

Calculating/updating bounding boxes

This dialog lets you calculate the 2D bounding box of the city objects stored in the database. The bounding box is useful, for instance, as spatial filter for the different import and export operations of the Importer/Exporter (see documentation of the corresponding operations) or for use in external tools.

_images/impexp_gui_calc_boundingbox_fig.png

Calculating the bounding box for selected feature types.

First, the top-level feature type needs to be selected for which the bounding box should be calculated [1]. The default option core:_CityObject will operate on all city objects in the database, but you can also restrict the calculation to a specific type such as bldg:Building or wtr:WaterBody. The bounding box can optionally be transformed into a user-defined coordinate reference system [2]. By default, the bounding box is presented in the same reference system as specified for the 3D City Database instance during setup. See Section 3.3 for details on how to define and manage user-defined reference systems.

If your database contains terminated city objects, the optional Feature Version filter [6] lets you define the version of the city objects that should be used for the calculation. The default option Latest version will only operate on non-terminated objects. With Valid version, you can specify that only city objects that were valid at a given timestamp or within a given time range should be considered. If you want the bounding box to be calculated for all city objects independent of whether they are terminated or not, simply disable the filter.

Note

The Feature Version filter works on the CREATION_DATE and TERMINATION_DATE columns of the table CITYOBJECT. More information can be found in Section 4.5.1.

To trigger the calculation, press the Calculate button. The coordinates of the lower left (xmin, ymin) and upper right (xmax, ymax) corner of the resulting bounding box are rendered in the corresponding fields of the dialog [3]. The values are also copied to the clipboard of your operating system and can therefore easily be pasted into the import and export dialogs. You can also manually copy the values to the clipboard by clicking the bbox_copy button [4], or by right-clicking on a text field [3] and choosing the corresponding option from the context menu.

By using the map button map_select [4], the calculated bounding box is rendered in a separate 2D map window for visual inspection as shown below. The usage of this map window is described in Section 4.8.

_images/impexp_map_window_fig.png

Map window for displaying and choosing bounding boxes. Note that the coordinate values of the bounding box are shown in the upper left component.

The calculation of the bounding box is based on the values stored in the ENVELOPE column of the CITYOBJECT table. If this column is NULL or contains an incorrect value (e.g., in case the value could not correctly be filled during import or the geometry representation of a city object has been changed), then the resulting bounding box will be wrong and subsequent operations might not provide the expected result. To fix the ENVELOPE values in the database, simply let the Importer/Exporter create missing values (i.e., replace NULL values) or recreate all values by clicking on the corresponding buttons [5]. This update process either affects only the city objects of a given feature type or all city objects based on the selection made in [1].

Note

This process directly updates the ENVELOPE column of the affected city objects and might take long to complete since the new values are calculated by evaluating all geometries of the city objects in all LoDs including implicit geometries.

Managing indexes

The Importer/Exporter allows the user to manually activate or deactivate indexes on predefined tables of the 3D City Database schema and to check their status.

_images/impexp_gui_managing_indexes_fig.png

Managing spatial and normal indexes.

The operation dialog differentiates between spatial indexes on geometry columns and normal indexes on columns with any other datatype. The buttons Activate, Deactivate, and Status trigger a corresponding database process on spatial indexes only, normal indexes only or both index types depending on which checkboxes are selected.

The VACUUM button is only available for PostgreSQL databases and performs a VACUUM ANALYZE operation on the columns of the selected indexes. This maintenance operation gathers and updates statistics on the columns to be able to choose the most efficient query plans and optimize the speed of query processing. Note that for most PostgreSQL databases, VACUUM is already run automatically at regular intervals.

The index operations only affect the following subset of all indexes defined by the 3D City Database schema:

Spatial and normal indexes affected by the index operation
Index type
Column(s)
Table
Spatial
ENVELOPE
CITYOBJECT
Spatial
GEOMETRY
SURFACE_GEOMETRY
Spatial
SOLID_GEOMETRY
SURFACE_GEOMETRY
Normal
GMLID, GMLID_CODESPACE
CITYOBJECT
Normal
LINEAGE
CITYOBJECT
Normal
CREATION_DATE
CITYOBJECT
Normal
TERMINATION_DATE
CITYOBJECT
Normal
LAST_MODIFICATION_DATE
CITYOBJECT
Normal
GMLID, GMLID_CODESPACE
SURFACE_GEOMETRY
Normal
GMLID, GMLID_CODESPACE
APPEARANCE
Normal
THEME
APPEARANCE
Normal
GMLID, GMLID_CODESPACE
SURFACE_DATA
Normal
GMLID, GMLID_CODESPACE
ADDRESS

The result of an index operation is reported in the console window. For instance, Fig. 4.9 shows the result of a status query on both spatial and normal indexes. The status ON means that the corresponding index is enabled.

_images/impexp_gui_indexes_status_report_fig.png

Result of a status query on spatial and normal indexes.

Note

It is strongly recommended to deactivate the spatial indexes before running a CityGML/CityJSON import on a large amount of data and to reactivate the spatial indexes afterwards. This way the import will typically be a lot faster than with spatial indexes enabled. The situation may be different when importing only a small dataset.

Caution

Activating and deactivating indexes can take a long time, especially if the database fill level is high. Note that the operation cannot be aborted by the user since this would result in an inconsistent database state.

Managing the spatial reference system

When setting up a 3DCityDB instance, you have to choose a spatial reference system (SRS) by picking a spatial reference ID (SRID) supported by the database and a corresponding SRS name identifier (gml:srsName) that is used in CityGML/CityJSON exports (see Section 1.3). These settings can easily be changed at any later time using the reference system operation.

_images/impexp_gui_change_srs_fig.png

Changing the SRS information of the 3DCityDB instance.

After connecting to a 3DCityDB, the SRID and gml:srsName input fields shown in the above dialog [1] are assigned the current values from the database. Simply edit the fields to pick a new SRID or SRS name identifier. Since changing the SRID potentially affects all geometries in your database and thus may take a long time to complete, the SRID field is disabled by default. Click on Edit [2] to enable changes to this field. Use the Check button [2] to make sure that your new SRID value is supported by the database. The gml:srsName field provides a drop-down list of common SRS identifier encoding schemes (such as OGC URN encoding, see Section 3.3). You may pick one of these proposals (be careful to replace the HEIGHT_SRID token with the correct value if required) or enter any other value.

When changing the SRID, you can choose whether the coordinates of geometry objects already stored in the database should be transformed to the new SRID or whether only the metadata should be updated [3]. The latter option might be enough, for example, if you accidentally picked a wrong SRID that does not match the imported geometries when setting up the database, and you simply want to correct this mistake.

Click on Apply to update the reference system information in the database according to your settings. The Restore button lets you discard any changes made to the SRID and gml:srsName fields.

Note

If you just want to use different gml:srsName values for different CityGML exports, then instead of changing the identifier in the database before every export it is simpler to create multiple user-defined reference systems for the same SRID (cf. Section 4.7.2) and pick one for each export (cf. Section 4.5).

Displaying supported CityGML ADEs

This tab provides a list of all CityGML Application Domain Extensions (ADEs) that are registered in the 3DCityDB instance and/or are supported by the Importer/Exporter. The following screenshot shows the corresponding dialog.

_images/impexp_gui_ADE_list_fig.png

Table of all supported CityGML ADEs.

The ADE table contains one entry per CityGML ADE. Each entry lists the name and the version of the ADE and indicates whether it is supported by the database and/or the Importer/Exporter (using check or cross signs). Database support requires that the ADE has been successfully registered in the 3DCityDB instance using the ADE Manager Plugin (see Section 5.3). Additional support by the Importer/Exporter requires that a corresponding ADE extension has been copied into the ade-extensions folder within the installation directory of the Importer/Exporter. Only if both conditions are met will both fields contain a check sign. If no ADE has been detected upon database connection, the table remains empty.

In the example of Fig. 4.11, there is only an Importer/Exporter extension for an ADE called KitEnergyADE but the connected 3DCityDB instance lacks support for it. EnergyADE data would therefore not be handled by the Importer/Exporter and thus not stored into the database in this scenario.

If you select an entry in the ADE table and click the Info button (or simply double-click on the entry), metadata about the ADE will be displayed in a separate window as shown below. The Status field shows whether the ADE is fully supported or some user action is required. The Encoding checkboxes illustrate for which output formats the ADE is available. Thus, ADE content will only be exported when choosing a supported output format for the export operation (see Section 4.5).

_images/impexp_ADE_metadata_dialog_fig.png

ADE metadata dialog.

Import

Attribute filter

The attribute filter takes an object identifier and/or a gml:name as parameter and only imports top-level features having a matching value for the respective attribute.

_images/impexp_import_attribute_filter.png

Attribute filter for import operations.

More than one identifier can be provided in a comma-separated list. Multiple gml:name values are not supported though.

The gml:name search string supports two wildcard characters: “*” representing zero or more characters and “.” representing a single character. You can use the escape character “\” to escape the wildcards. For example, if you provide *abc for the gml:name filter, then features with a gml:name of “xyzabc” and “abc” will both be imported. If you enter \*abc instead, the gml:name must exactly match “*abc” for the feature to be imported.

Import list filter

The import list filter allows you to provide a list of city objects that shall be imported or skipped during import.

_images/impexp_import_list_filter.png

Import list filter for import operations.

Import lists are simple comma-separated values (CSV) files that contain the identifiers of the city objects to be imported or skipped. Each identifier must be put on a separate line (row) of the file, and each line may contain additional values (columns) separated by a delimiter (typically a single reserved character such as comma, semicolon, tab, etc.). The first record may be reserved as header containing a list of column names. Usually, every row has the same sequence of columns. If a line starts with a predefined comment marker (typically a single reserved character such as #), the entire row is ignored and skipped.

Due to their simple structure, import lists can be easily created with external tools and processes. The following snippet shows an example of a simple import list that can be used with this import filter. It just provides an identifier per row. The first line is used as header.

GMLID
ID_0815
ID_0816
ID_0817
ID_0818
...

To use an import list, simply provide the full path to the CSV file. The further input fields of the dialog define the structure and content of the CSV file so that the identifiers can be correctly parsed from the file and used in the import operation. For this purpose, you can specify the delimiter used for separating values in the import list (by default, a comma is assumed as delimiter). If the values in the import list are quoted, you can also define the character used as quote (typically double quotes are used). The character used as comment marker as well as the encoding of the import list can also be specified.

Use the Preview button to get a preview of the first few lines of the import list when applying the provided options for parsing and interpreting the import list. This preview shows the contents of the import list in tabular form and is printed to the console window. It is very helpful for adjusting the delimiter character(s), quoting rules and comment marker. The preview should only show the lines containing identifiers, but no header line or comments. To identify the column which holds the identifiers in the file, you can either type in the Column name (in case the import list uses a header) or simply provide the Column index (note that the first column of a row has the index 1). In the latter case, you can also specify whether the first record in the CSV file shall be skipped because it is a header line.

Finally, define the mode of the import list filter. You can either choose to only import objects from the list or to skip objects from the list instead. During import, the identifiers from the import list are matched against the identifiers of the city objects in the input file. Based on the defined filter mode, matching objects are either imported or skipped.

Note

You can also use the attribute filter (see Section 4.4.1) to provide a list of identifiers of city objects to be imported. However, the list of identifiers must be entered manually using the attribute filter, whereas import lists can be generated by software.

Example use case

One use case for this filter is when using import logs for the import operation (see Section 4.4.6.11). Assume you start an import operation on a set of input files and the import is aborted or fails after a certain amount of features. The import log will contain the identifiers of those city objects that were successfully imported before the operation aborted. Thus, with this filter, you can easily resume the import after having fixed the issues that caused the failure. Since the import log is a CSV file, you can simply use it as import list and set the filter mode to skip objects from the list. When starting the import operation with these settings again, only those city objects will be imported from the input files that have not been processed in the first run.

Feature counter filter

The feature counter filter limits the number of top-level features to be imported.

_images/impexp_import_feature_counter_filter.png

Feature counter filter for import operations.

Simply enter the number of features into the count field. The start index parameter indicates the index within the set of all features over all input files at which the import shall begin. The parameters can be used together or individually.

Note

The start index uses zero-based numbering. Thus, the first top-level feature is assigned the index 0, rather than the index 1.

Bounding box filter

The bounding box filter takes a 2D bounding box as parameter that is given by the coordinate values of its lower left (xmin, ymin) and upper right (xmax, ymax) corner. The bounding box is evaluated against the gml:boundedBy property (CityGML) or the “geographicalExtent” property (CityJSON) of the input features. You can choose whether features overlapping with the provided bounding box are to be imported, or whether features must be inside of it.

_images/impexp_import_bbox_filter.png

Bounding box filter for import operations.

Make sure to choose a coordinate reference system from the drop-down list that matches the provided coordinate values. Otherwise, the spatial filter may not work as expected. The coordinate reference system list can be augmented with user-defined reference systems (see Section 4.7.2 for more information).

The coordinate values of the bounding box filter can either be entered manually or chosen interactively in a 2D map window. To open the map window, click on the map button map_select.

_images/impexp_bbox_selection_map_window_fig.png

Bounding box selection using the 2D map window.

In the map window, keep the left mouse button clicked while holding the ALT key. This lets you draw a bounding box on the map. In order to move the map to a specific location or address, simply enter the location or address in the input field on top of the map and click the search button map_search or use the map navigation controls. If you are happy with the bounding box selection, click the Apply button. This will close the map window and copy the coordinate values of the selected area into the corresponding fields of the bounding box filter and set the reference system to WGS 84. Click Cancel if you want to close the map window but skip your selection. A more comprehensive guide on how to use the map window is provided in chapter Section 4.8.

With the bbox_copy button on the bounding box filter dialog, you can copy a bounding box to the clipboard, while the bbox_paste button pastes a bounding box from the clipboard to the input fields of the bounding box filter (or use the right-click context menu).

Feature type filter

With the feature type filter, you can restrict the import to one or more feature types by enabling the corresponding checkboxes. Only features of the selected type(s) will be imported.

_images/impexp_import_feature_type_filter.png

Feature type filter for import operations.

The feature type filter only shows top-level feature types. It will automatically contain feature types from CityGML ADEs if a corresponding ADE extension has been correctly registered with the Importer/Exporter (see Section 5.3).

Preferences

The import operation can be customized with various preference settings that are presented in this chapter.

General

This preferences dialog lets you define general settings affecting the import operation.

_images/impexp_import_preferences_general_fig.png

Import preferences – General.

When importing a CityGML/CityJSON dataset, the import operation might run into errors, for instance, because of invalid data violating the CityGML or CityJSON schemas, missing external resources such as texture images, or invalid geometries. The default behaviour of the import operation is to fail fast on errors and to immediately cancel the import process. This way, invalid top-level city objects will not be imported into your database. If your import aborts due to errors, you can use the import log to resume or roll back the import operation (see Section 4.4.6.11 for more details).

You can disable the fail fast behaviour by unchecking the Cancel import immediately in case of errors option offered by this preferences dialog. When doing so, errors encountered during the import process are still recorded in the log but the operation tries to continue and complete the import.

Continuation

The Continuation preferences allow for specifying metadata that is assigned to every city object during import. The metadata is stored in columns of the table CITYOBJECT and is therefore accessible in SQL queries.

_images/impexp_import_preferences_continuation_fig.png

Import preferences – Continuation.

The following metadata can be set:

Metadata stored with every city object in the table CITYOBJECT.
Metadata
Description
Data lineage [1]
A string value denoting the origin of the data.
(column: LINEAGE; default value: NULL)
Reason for update [1]
A string value providing the reason for a data update.
(column: REASON_FOR_UPDATE; default value: NULL)
Updating person [2]
A string value identifying the person responsible for importing or updating the city object.
(column: UPDATING_PERSON; default value: name of the database user)
creationDate [3]
A timestamp value denoting the date of creation of the city object. If
this date is not available from the city object during import, it
may either be set to the import date or be inherited from the parent
object (if available). Alternatively, the user can choose to replace all
creation dates from the input files with the import date.
(column: CREATION_DATE; default value: import date)
terminationDate [4]
A timestamp value denoting the date of termination of the city object. If
this date is not available from the city object during import, it
may either be set to NULL or be inherited from the parent object (if
available). Alternatively, the user can choose to replace all termination
dates in the input files with NULL.
(column: TERMINATION_DATE; default value: NULL)
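For example, the data lineage stored with every imported city object can later be used in SQL queries to select all city objects that originate from a particular source. The following minimal sketch uses a hypothetical lineage value:

select
    id, gmlid
from
    cityobject
where
    lineage = 'cadastre_2022'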

Note

Both creationDate and terminationDate are properties available for city objects in both CityGML and CityJSON. When exporting data from the database, the properties are therefore written to a CityGML/CityJSON dataset. The remaining metadata information, however, does not map to predefined properties and does not get exported with this version of the Importer/Exporter.

Object identifier

Globally unique object identifiers are crucial for ensuring data consistency and for enabling data management workflows. Especially when it comes to (subsequently) updating the city model content in the database, unique identifiers will help to quickly identify and replace objects in the database with candidates from external datasets. Unfortunately, both in CityGML and CityJSON, identifiers do not meet the requirement of global uniqueness since they are, by definition, only unique within the scope of a single dataset.

CityGML uses the XML attribute gml:id to store identifiers for both features and geometries as shown below:

<bldg:Building gml:id="fid_1234">
  ...
  <bldg:lod2Solid>
    <gml:Solid gml:id="gid_5678">
    ...
    </gml:Solid>
  </bldg:lod2Solid>
  ...
</bldg:Building>

In CityJSON, city objects are stored in the "CityObjects" property. The value of this property is a collection of key-value pairs, where the key is the identifier of the city object, and the value is the city object itself. Note that CityJSON does not have identifiers for geometry objects.

{
  "type": "CityJSON",
  "CityObjects": {
    "fid_1234": {
      "type": "Building",
      "geometry": [{ }]
    }
  }
}

By default, the Importer/Exporter assumes that the identifiers associated with the city objects to be imported are globally unique and therefore imports them “as is” into the database. Only if a city object (or geometry object) lacks an identifier will a UUID value be generated at import time and stored with the object.

_images/impexp_import_preferences_gmlid_handling_fig.png

Import preferences – Object identifier.

This default behavior can be changed with the above preferences dialog in order to let the Importer/Exporter replace all identifiers in the input file(s) with generated UUID values. The user may choose a prefix for the identifier. The original identifier value may optionally be stored as an external reference so that this information is not lost.

In addition to the identifier, the 3DCityDB allows for storing a second GMLID_CODESPACE metadata value. The idea is that the compound value of identifier and GMLID_CODESPACE is globally unique. The user can choose to use the file name of the input file, its complete path or a user-defined string as GMLID_CODESPACE. By default, the Importer/Exporter does not import a GMLID_CODESPACE value though.
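For example, a data management workflow could then identify a specific object in the database via this compound key. The following minimal SQL sketch uses hypothetical identifier and codespace values:

select
    id
from
    cityobject
where
    gmlid = 'fid_1234' and gmlid_codespace = 'city.gml'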

Note

The Importer/Exporter internally only relies on the identifier value to identify objects, for example, when resolving XLink references. The GMLID_CODESPACE value is primarily meant to support user-defined data management processes.

Appearance

The Appearance preference settings define how appearance information of city objects shall be processed at import time.

_images/impexp_import_preferences_appearance_fig.png

Import preferences – Appearance.

To import appearances, simply enable the corresponding checkbox. This is also the default value. The Importer/Exporter will also import all texture image files referenced from the appearances. Both relative references (i.e., relative to the input file) and global references (e.g., web URLs) to texture files are supported and resolved. The latter might require internet access though. Alternatively, a user may choose to only import the appearance information but to skip the texture files by disabling the import texture file option.

Prior to version 1.0 of the CityGML standard, material and texture information of surface objects was stored using TexturedSurface elements. This concept was replaced by the Appearance module in CityGML 1.0 and has been deprecated since then. Although the CityGML specification discourages the use of TexturedSurface elements, they are still allowed even in CityGML 2.0 datasets. The Importer/Exporter can parse and interpret TexturedSurface information but will automatically convert it losslessly to Appearance elements. Since TexturedSurface information is not organized into themes, whereas a theme should be used for Appearance elements, the user can define the theme that shall be used in the conversion process. The default value is rgbTexture.

Geometry

Before importing the city objects into the 3D City Database, the Importer/Exporter can apply an affine coordinate transformation to all geometry objects. This option is disabled by default.

_images/impexp_import_preferences_geometry_fig.png

Import preferences – Geometry.

An affine transformation (cf. [Weis2015]) is any transformation that preserves collinearity (i.e., points initially lying on a line still lie on a line after transformation) and ratios of distances (e.g., the midpoint of a line segment remains the midpoint after transformation). It maps lines to lines, polylines to polylines, and polygons to polygons while preserving all their intersection properties. Geometric contraction, expansion, dilation, reflection, rotation, skewing, similarity transformations, spiral similarities, and translation are all affine transformations, as are their combinations.

The affine transformation is defined as the result of the multiplication of the original coordinate vectors by a matrix plus the addition of a translation vector.

\[{\overrightarrow{p}}^{'} = A \bullet \overrightarrow{p} + \overrightarrow{b}\]

In matrix form using homogeneous coordinates:

\[\begin{split}\begin{bmatrix} x^{'} \\ y^{'} \\ z^{'} \\ \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} & m_{14} \\ m_{21} & m_{22} & m_{23} & m_{24} \\ m_{31} & m_{32} & m_{33} & m_{34} \\ \end{bmatrix} \bullet \begin{bmatrix} x \\ y \\ z \\ 1 \\ \end{bmatrix}\end{split}\]

The coefficients of this matrix and translation vector can be entered in this preferences dialog (cf. Fig. 4.23). The first three columns define any linear transformation; the fourth column contains the translation vector. The affine transformation affects neither the dimensionality nor the associated reference system of the geometry object, but only changes its coordinate values. It is applied uniformly to all coordinates of all objects in the input file(s). This also includes all matrices in the data like the 2x2 matrices of GeoreferencedTextures (CityGML only), the 3x4 transformation matrices of TexCoordGen elements (CityGML only) used for texture mapping and the 4x4 transformation matrices for ImplicitGeometries.

Caution

An affine transformation cannot be undone or reversed after the import using the Importer/Exporter.

Two elementary affine transformations are predefined: 1) Identity matrix (leaves all geometry coordinates unchanged), which serves as an explanatory example of how the matrix values should be set, and 2) Swap X/Y, which exchanges the values of the x and y coordinates in all geometries (and thus mirrors the geometries at the plane x = y). The latter is very helpful for correcting CityGML datasets in which northing and easting values are given in the wrong order.
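For illustration, the Swap X/Y transformation corresponds to the following coefficients:

\[\begin{split}{\overrightarrow{p}}^{'} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} \bullet \overrightarrow{p}\end{split}\]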

Example: For an ordinary translation of all city objects by 100 meters along the x-axis and 50 meters along the y-axis (assuming all coordinate units are given in meters), the identity matrix must be applied together with the translation values set as coefficients in the translation vector:

\[\begin{split}{\overrightarrow{p}}^{'} = \begin{bmatrix} 1 & 0 & 0 & 100 \\ 0 & 1 & 0 & 50 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} \bullet \overrightarrow{p}\end{split}\]
Address

Note

These preference settings only apply to CityGML input files.

CityGML uses the OASIS Extensible Address Language (xAL) standard for the representation and exchange of address information. xAL provides a flexible and generic framework for encoding address data according to arbitrary address schemes. The columns of the ADDRESS table of the 3D City Database, however, only map the most common fields in address records (cf. Section 3.2). Moreover, the Importer/Exporter currently does not support arbitrary xAL fragments but is tailored to the parsing of the following two xAL templates that are taken from the CityGML specification.

<Address>
  <xalAddress>
    <!-- Bussardweg 7, 76356 Weingarten, Germany -->
    <xAL:AddressDetails>
      <xAL:Country>
        <xAL:CountryName>Germany</xAL:CountryName>
        <xAL:Locality Type="City">
          <xAL:LocalityName>Weingarten</xAL:LocalityName>
          <xAL:Thoroughfare Type="Street">
            <xAL:ThoroughfareNumber>7</xAL:ThoroughfareNumber>
            <xAL:ThoroughfareName>Bussardweg</xAL:ThoroughfareName>
          </xAL:Thoroughfare>
          <xAL:PostalCode>
            <xAL:PostalCodeNumber>76356</xAL:PostalCodeNumber>
          </xAL:PostalCode>
        </xAL:Locality>
      </xAL:Country>
    </xAL:AddressDetails>
  </xalAddress>
</Address>
<Address>
  <xalAddress>
    <!-- 46 Brynmaer Road Battersea LONDON, SW11 4EW United Kingdom -->
    <xAL:AddressDetails>
      <xAL:Country>
        <xAL:CountryName>United Kingdom</xAL:CountryName>
        <xAL:Locality Type="City">
          <xAL:LocalityName>LONDON</xAL:LocalityName>
          <xAL:DependentLocality Type="District">
            <xAL:DependentLocalityName>Battersea</xAL:DependentLocalityName>
            <xAL:Thoroughfare>
              <xAL:ThoroughfareNumber>46</xAL:ThoroughfareNumber>
              <xAL:ThoroughfareName>Brynmaer Road</xAL:ThoroughfareName>
            </xAL:Thoroughfare>
          </xAL:DependentLocality>
          <xAL:PostalCode>
            <xAL:PostalCodeNumber>SW11 4EW</xAL:PostalCodeNumber>
          </xAL:PostalCode>
        </xAL:Locality>
      </xAL:Country>
    </xAL:AddressDetails>
  </xalAddress>
</Address>

If xAL address information in a CityGML instance document does not comply with one of these templates (e.g., because of additional or completely different entries), the address information will only partially be stored in the database (if at all). To not lose any original address information, the entire <xal:AddressDetails> XML fragment can be imported “as is” from the input CityGML file and stored in the XAL_SOURCE column of the ADDRESS table in the 3D City Database. For this purpose, simply check the Import original <xal:AddressDetails> XML fragment option (this is the default value).

_images/impexp_import_preferences_address_fig.png

Import preferences – Address.

See Section 4.5.9.7 for how to export the xAL fragment from XAL_SOURCE.

Note

The Importer/Exporter always tries to populate the columns of the ADDRESS table (STREET, HOUSE_NUMBER, etc.) from the xAL address information, independent of whether the <xal:AddressDetails> element shall be imported. Thus, if the option is enabled, the original XML representation is stored in addition to, not instead of, these columns.
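For example, the addresses that carry an original xAL fragment can be inspected directly in the database. The following minimal sketch queries the ADDRESS columns mentioned above:

select
    id, street, house_number, xal_source
from
    address
where
    xal_source is not null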

XML validation

Note

These preference settings only apply to CityGML input files.

On the Import tab of the operations window, the input files to be imported into the database can be manually validated against the official CityGML XML Schemas. This preference dialog lets a user choose to perform XML validation automatically with every database import.

_images/impexp_import_preferences_xml_validation_fig.png

Import preferences – XML validation.

In general, it is strongly recommended to ensure (either manually or automatically) that the input files are valid with respect to the CityGML XML schemas. Invalid files might cause the import procedure to behave unexpectedly or even to abort abnormally.

If XML validation is chosen to be performed automatically during imports, every invalid top-level feature will be discarded from the import. Nevertheless, the import procedure will continue to work on the remaining features in the input file(s). To track which features have been imported, you can enable the import log (see Section 4.4.6.11).

Validation errors are printed to the console window. Often, error messages quickly become lengthy and confusing. To keep the console output manageable, the user can choose to only report the first validation error per top-level feature and to suppress all subsequent error messages.

Note

The XML validation in general does not require internet access since the CityGML XML schemas are packaged with the Importer/Exporter. These internal copies of the official XML schemas will be used to check CityGML XML content in input files. The user cannot change this behavior. External XML schemas will only be considered in case of unknown XML content, which might require internet access. Precisely, the following rules apply:

  • If the namespace of an XML element is part of the official CityGML 2.0 or 1.0 standard, it will be validated against the internal copies of the official CityGML 2.0 or 1.0 schemas (no internet access required).
  • If the element’s namespace is unknown, the element will be validated against the schema pointed to by the xsi:schemaLocation value on the root element or the element itself. This is necessary when, for instance, the input document contains XML content from a CityGML Application Domain Extension (ADE). Note that loading the schema might require internet access.
  • If the element’s namespace is unknown and the xsi:schemaLocation value (provided either on the root element or the element itself) is empty, validation will fail with a hint to the element and the missing schema document.
XSL Transformation

Note

These preference settings only apply to CityGML input files.

The XSL transformation settings let you apply XSLT stylesheets to the CityGML input data before it is imported into the database. Simply check the Apply XSLT stylesheets option and point to an XSLT stylesheet in your local file system using the Browse button. The stylesheet will then be automatically applied by the import process to transform the CityGML data.

_images/impexp_import_preferences_xsl_fig.png

Import preferences – XSL transformation.

By clicking the + and - buttons, more than one XSLT stylesheet can be provided. The stylesheets are executed in the given order, with the output of a stylesheet being the input for its direct successor. The Importer/Exporter is shipped with example XSLT stylesheets in subfolders below templates/XSLTransformations in the installation directory.

Note

  • To be able to handle arbitrarily large input files, the importer chunks every CityGML input file into top-level features, which are then imported into the database. Each XSLT stylesheet will hence just work on individual top-level features but not on the entire file. Make sure to consider this when developing your XSLT.
  • The output of each XSLT stylesheet must again be a valid CityGML structure.
  • Only stylesheets written in the XSLT language version 1.0 are supported.
CityJSON options

Note

These preference settings only apply to CityJSON input files.

_images/impexp_import_preferences_cityjson_fig.png

Import preferences – CityJSON options.

CityJSON offers an extension mechanism similar to CityGML Application Domain Extensions (ADEs). A CityJSON Extension makes it possible to extend the core data model of CityJSON by adding new attributes to predefined city objects, defining new city object types, and adding additional properties to the root of a document.

The 3D City Database supports storing and managing CityGML ADEs in extra tables that seamlessly integrate with the 3DCityDB core schema (see Section 3.2.16). This mechanism can also be used for CityJSON Extensions, but it then also requires a Java extension package for the Importer/Exporter (see Section 5.3.6).

In contrast to CityGML ADEs, the Importer/Exporter can alternatively parse CityJSON Extensions in a more generic way and map new city object types to GenericCityObject instances and additional attributes of city objects to generic attributes. This way, the extension data is not lost and can be stored in the core 3DCityDB schema without the need for extra tables. Please note that the data will, of course, also be exported as generic objects and attributes again so that the original CityJSON Extension structure cannot be restored. If you want to handle CityJSON Extensions in this generic way, simply enable the corresponding option in the above preference settings dialog.

Indexes

In addition to the Database tab on the operations window, which lets you enable and disable spatial and normal indexes in the 3D City Database manually (cf. Section 4.3.3), this preference dialog lets you set a default index strategy for database imports.

_images/impexp_import_preferences_indexes_fig.png

Import preferences – Indexes.

The dialog differentiates between settings for spatial indexes [1] and normal indexes [2] but offers the same options for each index type.

The default setting is to not change the status (i.e., either enabled or disabled) of the indexes. This default behavior can be changed so that indexes are always disabled before starting an import process. The user can choose whether the indexes shall be automatically reactivated after the import has finished.

Note

It is strongly recommended to deactivate the spatial indexes before running a CityGML/CityJSON import on a large amount of data and to reactivate the spatial indexes afterwards. This way, the import will typically be much faster than with spatial indexes enabled. The situation may be different when importing only a small dataset.

Caution

Activating and deactivating indexes can take a long time, especially if the database already contains a large amount of data. Note that the operation cannot be aborted by the user since this would result in an inconsistent database state.

Import log

An import process does not necessarily import all top-level features contained in the provided input file(s). An obvious reason is that spatial or thematic filters naturally narrow down the set of imported features. Also, if the import procedure aborts early (either requested by the user or caused by severe errors), not all input features might have been processed. To understand which top-level features were actually loaded into the database during an import session, a user can choose to let the Importer/Exporter create an import log.

_images/impexp_import_preferences_log_fig.png

Import preferences – Import log.

Simply enable the checkbox on this settings dialog to activate import logs (disabled by default). You must additionally provide the full path to the log file that shall be used to record the imported features. Either type the file name manually or use the Browse button to open a file selection dialog. The following modes for creating the log file are supported:

  1. To ensure that every import operation uses a unique file name for the import log, you can choose to let the import process append a timestamp of the form yyyy-MM-dd_HH-mm-ss-SSS as suffix to the provided file name (default option). This way, every import operation will automatically be recorded in a separate log file. For example, when choosing import.log as file name, the import log will be stored in:

    import-yyyy-MM-dd_HH-mm-ss-SSS.log

  2. When disabling the default option, all import operations are logged into the same file. By default, new log entries are appended to the end of the file so that entries from previous import operations are not lost. You can alternatively choose to truncate the log file before every import operation by checking the corresponding option. Use this option with care.

The import log is a simple CSV file with one record (line) per imported top-level feature. The following figure shows an example.

_images/impexp_import_log_example_fig.png

Example import log.

The first three lines of the import log contain metadata about the version of the Importer/Exporter that was used for the import, the database connection string, and the timestamp of the import. Each metadata line starts with the # character as comment marker.

The first line below the metadata block provides a header for the fields of each record. The field names are FEATURE_TYPE, CITYOBJECT_ID, GMLID_IN_FILE, and INPUT_FILE. A single comma separates the fields. The records follow the header line. The meaning of the fields is as follows.

Fields of the CSV import log file
Field name
Description
FEATURE_TYPE
A string representing the type name of the imported CityGML feature.
CITYOBJECT_ID
The value of the ID column (primary key) of the CITYOBJECT table where the feature was inserted.
GMLID_IN_FILE
The original object identifier of the feature in the input file. Note: the GMLID in the database might differ from the original identifier due to import settings.
INPUT_FILE
The path of the input file from which the feature was imported.

The last line of each import log is a footer that contains metadata about whether the import was successfully finished or aborted.
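As an illustration, a schematic import log with two records could look as follows. The metadata and footer lines are shown as placeholders since their exact wording may differ, and all record values are hypothetical:

# <version of the Importer/Exporter>
# <database connection string>
# <timestamp of the import>
FEATURE_TYPE,CITYOBJECT_ID,GMLID_IN_FILE,INPUT_FILE
Building,1,BLDG_0001,/data/city.gml
Road,2,ROAD_0001,/data/city.gml
# <footer: import finished or aborted>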

Hint

If an import process was aborted by the user or failed due to errors, the import log file can also be used to automatically resume or roll back the import operation. Thus, it helps you ensure a consistent database state.

  • To resume the import, you can use the import log as input for the import list filter and set the filter to skip all city objects from the list (see Section 4.4.2). When re-running the import with these settings, only the city objects that have not been processed in the first run will be imported.
  • A rollback can be achieved by feeding the import log as delete list to the delete command of the Importer/Exporter command-line interface (see Section 4.9.5).
Resources
_images/impexp_import_preferences_resources_fig.png

Import preferences – Resources.

Multithreading settings

The software architecture of the Importer/Exporter is based on multithreading. Put simply, the different tasks of an import process are carried out by separate threads. The decoupling of compute-bound from I/O-bound tasks and their parallel non-blocking processing usually leads to an increase in overall application performance. For example, threads waiting for a database response do not block threads parsing the input document or processing the CityGML/CityJSON input features. In a multi-core environment, threads can even be executed simultaneously on multiple CPUs or cores.

The Resource settings allow for controlling the minimum and maximum number of concurrent threads during import [1]. Make sure to enter reasonable values depending on your hardware configuration. By default, the maximum number is set to the number of available CPUs/cores times two.

Caution

A higher number of threads does not necessarily result in better performance. On the contrary, too many active threads introduce disadvantages such as thread life-cycle overhead and resource thrashing. Also note that each thread requires its own physical connection to the database. Therefore, your database must be ready to handle enough parallel physical connections. Ask your database administrator for assistance.

Batch processing

In order to optimize database response times, multiple database statements are submitted to the database in a single request (batch processing). This allows for efficient data processing on the database side. The user can influence the number of SQL statements in one batch through the settings dialog [2]. The dialog differentiates between batch sizes for top-level features (default: 20) and cache entries for object identifiers and temporary XLink information (default: 1000 each).

Note

All database operations within one batch are buffered in main memory before being submitted to the database. Thus, the Importer/Exporter might run out of memory if the batch size is too high (see Section 4.1 for how to increase the available main memory). After a batch is submitted, the transaction is committed.

Cache settings

The Importer/Exporter employs strategies for parsing CityGML datasets of arbitrary file size and for resolving XLink references. A naive approach for XLink resolving would read the entire CityGML dataset into main memory. However, CityGML datasets quickly become too big to fit into main memory. For this reason, the import process follows a two-phase strategy:

In a first run, features are written to the database neglecting references to remote objects. If a feature contains an XLink though, any context information about the XLink is written to temporary database tables. This information comprises, for instance, the table name and primary key of the referencing feature/geometry instance as well as the identifier (gml:id) of the target object. In addition, while parsing the document, the import process keeps track of every encountered gml:id as well as the table name and primary key of the corresponding object in the database. It is important to record this information because it cannot be predicted a priori whether or not a gml:id is referenced by an XLink from somewhere else in the document. In order to ensure fast access, the information is cached in memory. If the maximum cache size is reached, the cache is drained to temporary database tables to prevent memory overflows.

In a second run, the temporary tables containing the context information about XLinks are revisited and queried. Since the entire CityGML document has been processed at this point in time, valid references can be resolved and processed accordingly. With the help of the object identifier cache, the referenced objects can be quickly identified within the database.

The caching and paging behaviour for object identifiers can be influenced via the Resource preferences [3]. The dialog lets a user enter the maximum number of identifiers to be held in main memory (default: 200,000 entries), the percentage of entries that will be written to the database if the cache limit is reached (page factor, default: 85%), as well as the number of parallel temporary tables used for paging (table partitions, default: 10). The Importer/Exporter uses separate caches for identifiers of geometries and features [3]. Moreover, a third cache is used for handling texture atlases and offers similar settings [4].

Note

By default, the temporary tables for draining the caches are created in the same 3D City Database instance. You can also choose to use a local cache instead (see Section 4.7.3 for more details). However, note that some temporary information must be stored in the database even if you use a local cache to be able to perform JOINs between temporary tables and tables of the 3DCityDB schema.

To load 3D city model content into a 3D City Database instance, the Importer/Exporter supports the import of CityGML and CityJSON files on the Import tab of the operations window.

_images/impexp_CityGML_import_dialog_fig.png

The import dialog.

Input files and formats

The list of files to be imported must be provided at the top of the import dialog [1]. Files can be selected by clicking on the Browse button. Alternatively, you can drag&drop files from your preferred file explorer onto the Import tab. If the file list already contains entries, the drag&drop operation will replace them by default. If you want to keep the previous entries and only append additional files, keep the CTRL key pressed while dropping (on Windows). The Remove button or DEL key lets you remove selected entries from the file list. Note that adding folders to the list is also supported. Each folder will be recursively scanned for input files to be imported.

The import operation supports the following file formats and extensions:

Supported input file formats and extensions
Format
File extensions
CityGML versions 2.0, 1.0, and 0.4
*.gml, *.xml
CityJSON version 1.0.x
*.json, *.cityjson
GZIP compressed files
*.gz, *.gzip
ZIP archives
*.zip

The file formats are mainly detected based on the file extension, so please make sure to use one of the supported file extensions from Table 4.7. ZIP archives are recursively scanned for contained CityGML and CityJSON files. Additional files referenced from the CityGML/CityJSON files such as texture images will also be imported into the database if the references can be correctly resolved during import. This also holds true if the additional files are located inside the same ZIP archive as the CityGML/CityJSON files.

Caution

While even large CityGML files can be read in a streaming fashion (i.e., one top-level feature after the other), large parts of a CityJSON file must be kept in main memory while reading the entire file. To avoid memory issues, make sure the file size of the CityJSON input file is small enough, otherwise the import process will terminate with an exception. You can also increase the available memory for the Importer/Exporter application (see Section 4.1).

Import filters

The import dialog allows for setting thematic and spatial filters to narrow down the set of top-level city objects that are to be imported from the input files [2]. The available filters are discussed in separate sections of this chapter.

To enable a filter, simply select its checkbox. This will automatically make the filter dialog visible. Make sure to provide the mandatory input for the filter to work correctly. If more than one filter is enabled, the filters are combined in a logical AND operation, i.e. all filter criteria must be fulfilled for a city object to be imported. If no checkbox is enabled, no filters are applied and, thus, all features contained in the input files will be imported.

Note

All import filters are only applied to top-level features but not to nested sub-features.

Schema validation

Before importing, the input files can be validated against the official CityGML XML and CityJSON schemas. Simply click the Just Validate button [4] in order to run the validation process. Filter settings are not considered in this process. Note that this operation does not require internet access since the schemas are packaged with the application. The features from the input files are not imported into the database during validation. The validation results are printed to the console window.

Note

It is strongly recommended that only input files having successfully passed the validation are imported into the database. Otherwise, errors in the data may lead to unexpected behavior, error messages or even abnormal termination of the import process.

Note

CityGML ADE schemas are automatically considered in the validation process if the ADE has been correctly registered with the Importer/Exporter (see Section 5.3 for more details). This way, ADE data can also be checked before importing. CityJSON Extension schemas are, however, not supported by the validation process. Please use an external tool like cjio to validate such datasets.

Import preferences

More fine-grained preference settings affecting the import operation are available on the Preferences tab of the operations window [5]. Make sure to check these settings before starting the import process. A full documentation of the import preferences is provided in Section 4.4.6. The following table provides a summary overview.

Summary overview of the import preferences
Preference name
Description
General
General options like behaviour in error situations to be used for imports.
Continuation
Metadata that is stored for every object in the database such as the data lineage, the updating person or the creationDate property.
Object identifier
Generates UUIDs where object identifiers are missing on input features or replaces all identifiers with UUIDs.
Appearance
Defines whether appearance information should be imported.
Geometry
Allows for applying an affine transformation to the input geometry.
Address
(CityGML only)
Controls the way in which xAL address fragments are imported into the database.
XML validation
(CityGML only)
Performs XML validation automatically and excludes invalid features from being imported.
XSL Transformation
(CityGML only)
Defines one or more XSLT stylesheets that shall be applied to the city objects in the given order before import.
CityJSON options
Defines import options for CityJSON input files.
Indexes
Settings for automatically enabling/disabling spatial and normal indexes during imports.
Import log
Creates a list of all successfully imported CityGML top-level features.
Resources
Allocation of computer resources used in the import operation.

Starting the import process

Once all import settings are correct, the Import button [3] starts the import process. If a database connection has not been established manually beforehand, the currently selected entry on the Database tab is used to connect to the 3D City Database. The separate steps of the import process as well as all errors and warnings that might occur during the import are reported to the console window, whereas the overall progress is shown in a separate status window. The import process can be aborted at any time by pressing the Cancel button in the status window. The Importer/Exporter will make sure that all pending city objects are completely imported before it terminates the import process.

After having completed the import, a summary of the imported CityGML top-level features is printed to the console window.

Caution

The Importer/Exporter does not check by any means whether a top-level feature from an input file already exists in the database. Thus, if an import is executed twice on the same dataset, all CityGML features contained in the dataset will be imported twice.

One way to avoid duplicate features is, for instance, to manually set a UNIQUE constraint on the GMLID column of the CITYOBJECT table, as sketched below.
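The following is a minimal sketch for PostgreSQL; the schema name citydb is the default and may differ in your setup. Note that with such a constraint in place, importing a dataset that reuses existing identifiers will fail:

alter table citydb.cityobject
    add constraint cityobject_gmlid_unique unique (gmlid)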

Hint

To improve the speed of the import operation for a very large number of features (bulk imports), the spatial indexes can be disabled before and re-activated after the import. For a smaller number of features though, disabling and enabling the spatial indexes might take longer than the actual import itself. Normal indexes should never be disabled before an import.

Importing into version-enabled tables under Oracle typically takes considerably more time than importing into non-version-enabled tables.

Note

The import operation does not automatically apply a coordinate transformation to the internal reference system of the 3D City Database instance. Thus, if the coordinate reference system of the CityGML input data does not match the coordinate reference system defined for the 3D City Database instance, the user must transform the coordinate values before importing the data (or use an affine transformation during import if this is enough). A possible workaround procedure can be realized as follows:

  1. Set up a second (temporary) instance of the 3D City Database with an internal CRS matching the CRS of the CityGML instance document.
  2. Import the dataset into this second 3D City Database instance.
  3. Export the data from this second instance into the target CRS by applying a coordinate transformation (see CityGML export documentation in Section 4.5).
  4. The exported CityGML document now matches the CRS of the target 3D City Database instance and can be imported into that database. The temporary database instance can be dropped.

Alternatively, you can change the reference system in the database to the one used by the imported geometries (see the corresponding database operation in Section 4.3.4).

Export

Feature version filter

In both CityGML and CityJSON, the temporal creationDate and terminationDate attributes can be used to represent different versions of the same feature that are valid at different points in time. The 3D City Database allows for storing multiple versions of the same feature to enable object histories. The timestamps are stored in the CREATION_DATE and TERMINATION_DATE columns of the CITYOBJECT table.

Using the feature version filter, a user can choose which version of the top-level features should be selected in an export operation.

_images/impexp_export_feature_version_filter.png

Feature version filter for export operations.

The different feature version options available from the drop-down list are described below.

Overview of the different feature version options
Feature Version
Description
Latest version
Selects top-level features that are not marked as terminated in the database and, thus, whose TERMINATION_DATE attribute is null.
Valid version
Selects top-level features that were valid at a given timestamp or for a given time range. The filter is evaluated against the CREATION_DATE and TERMINATION_DATE attributes.
Terminated version
Selects only terminated top-level features. You can choose to either select all terminated features or only those that were terminated at a given timestamp. The filter is evaluated against the TERMINATION_DATE attribute.

For example, you can use Valid version to query a past status of your 3D city model (e.g., at March 1st, 2018) and compare it to the current version.
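Conceptually, selecting the feature versions valid at such a point in time corresponds to a check against the two timestamp columns, roughly as in the following sketch (the actual SQL generated by the Importer/Exporter may differ):

select
    id
from
    cityobject
where
    creation_date <= timestamp '2018-03-01 00:00:00'
    and (termination_date is null or termination_date > timestamp '2018-03-01 00:00:00')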

Note

For the feature version filter to work correctly, you must make sure that the validity times of subsequent feature versions do not overlap. The Importer/Exporter does not provide specific tools for managing feature versions in the database.

Hint

If your 3D City Database does not contain multiple feature versions, you should always disable the feature version filter to avoid unnecessarily complex SQL queries.

Caution

Typically, the same object identifier is used for the different feature versions to be able to relate them to each other. If your data is structured like this and you export all versions to the same output file (e.g., by disabling the feature version filter), then this will result in duplicate gml:id values for CityGML and, thus, the output file will be invalid according to the CityGML XML schema. You can still process the data though.

For CityJSON, the object identifier is used as key for the "CityObjects" property. Thus, it is not possible to write multiple city objects sharing the same identifier to the same file. As a consequence, the output file will always contain just one version but not all versions.

Attribute filter

The attribute filter lets you define values for the object identifier, gml:name and citydb:lineage, which must be matched by a top-level feature to be exported.

_images/impexp_export_attribute_filter.png

Attribute filter for export operations.

More than one identifier can be provided in a comma-separated list. Multiple gml:name and citydb:lineage values are not supported though.

Both the gml:name and citydb:lineage search strings support two wildcard characters: “*” representing zero or more characters and “.” representing a single character. You can use the escape character “\” to escape the wildcards. For example, if you provide *abc for the gml:name filter, then features with a gml:name of “xyzabc” and “abc” will both be exported. If you enter \*abc instead, the gml:name must exactly match “*abc” for the feature to be exported.

SQL filter

The SQL filter offers a powerful way to query top-level features based on a user-defined SELECT statement.

_images/impexp_SQL_query_dialog_fig.png

SQL filter for export operations.

The SQL query is entered in [1]. The + and - buttons [2] on the right side of the input field allow for increasing or reducing the size of the input field.

In general, any SELECT statement supported by the underlying database system can be used as SQL filter. The query may operate on all tables and columns of the database and may involve any database function or operator. The SQL filter therefore provides a high degree of flexibility for querying content from the 3DCityDB based on your filter criteria.

The only mandatory requirement is that the SQL query must return a list of database IDs of the selected city objects. Put differently, the result set returned by the query may only contain a single column with references to the ID column of the CITYOBJECT table. The name of the result column can be freely chosen, and the result set may contain duplicate ID values. Of course, it must also be ensured that the SELECT statement follows the specification of the database system.

The following example shows a simple query that selects all city objects having a generic attribute of name energy_level with a double value less than 12.

select
    cityobject_id
from
    cityobject_genericattrib
where
    attrname='energy_level' and realval < 12

The CITYOBJECT_ID column of CITYOBJECT_GENERICATTRIB stores foreign keys to the ID column of CITYOBJECT. The return set therefore fulfills the above requirement.

Note that you do not have to care about the type of the city objects belonging to the ID values in the return set. Since the SQL filter is evaluated together with all other filter settings on the Export tab, the export operation will automatically make sure that only top-level features in accordance with the feature type filter are exported. For example, the above query might return ID values of buildings, city furniture, windows or traffic surfaces. If, however, only buildings have been chosen in the feature type filter, then all ID values in the result set not belonging to buildings will be ignored. This allows writing generic queries that can be reused in different filter combinations. Of course, you may also limit the result set to specific city objects if you like.

The following example illustrates a more complex query selecting all buildings having at least one door object.

select
     t.building_id
from
     thematic_surface t
inner join
     opening_to_them_surface o2t on o2t.thematic_surface_id = t.id
inner join
     opening o on o.id = o2t.opening_id
where
     o.objectclass_id = 39
group by
     t.building_id
having
     count(distinct o.id) > 0

Caution

Statements other than SELECT, such as UPDATE, DELETE or DDL commands, will be rejected and yield an error message. However, in principle, it is possible to create database functions that can be invoked with a SELECT statement and that delete or change content in the database. An example is the set of DELETE functions offered by the 3DCityDB itself (cf. Section 3.5.8). For this reason, the export operation scans the SQL filter statement for these well-known DELETE functions and refuses to execute them. However, similar functions can also be created after setting up the 3DCityDB schema and thus are not known to the export operation a priori. If such functions exist and a user of the Importer/Exporter shall not be able to accidentally invoke them through an SQL query, it is strongly recommended that the user only connects to the 3DCityDB instance via a read-only database user (cf. Section 3.4.2 and the sketch below).
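For PostgreSQL, such a read-only role could be sketched roughly as follows; the schema and role names are examples, and Section 3.4.2 describes the recommended way to set up read-only users:

create role citydb_reader login password 'change_me';
grant usage on schema citydb to citydb_reader;
grant select on all tables in schema citydb to citydb_reader;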

LoD filter

The Level-of-Detail (LoD) filter allows for exporting only specific LoDs of the city objects.

_images/impexp_export_lod_filter.png

LoD filter for export operations.

Simply enable the checkboxes of the LoDs that you want to export from the database. If you select more than one LoD, the Filter mode drop-down list lets you choose how the selected LoDs should be evaluated.

Filter modes for combining LoDs
Filter mode
Description
Or
City objects having a spatial representation in at least one of the selected LoDs will be exported. Additional LoD representations of the city object that do not match the user selection are not exported.
And
Only city objects having a spatial representation in all of the selected LoDs will be exported. Additional LoD representations of the city object that do not match the user selection are not exported.
Minimum LoD
This is a special version of the Or mode that only exports the lowest LoD representation from the matching ones. The exported LoD may therefore differ for each city object.
Maximum LoD
This is a special version of the Or mode that only exports the highest LoD representation from the matching ones. The exported LoD may therefore differ for each city object.

Many feature types in both CityGML and CityJSON can have nested sub-features. In such cases, the top-level feature itself is not required to have a spatial property, but the geometry can be modelled for its nested sub-features. For example, a CityGML bldg:Building feature does not need to provide an LoD 2 geometry through its own bldg:lod2Solid or bldg:lod2MultiSurface properties. Instead, it can have a list of nested boundary surfaces such as bldg:WallSurface and bldg:RoofSurface features that have own LoD 2 representations. Nevertheless, in this case the bldg:Building is considered to be represented in LoD 2.

To handle these cases, the LoD filter provides the search depth parameter to specify how many levels of nested features shall be considered when searching for matching LoD representations. The default value of “1” means that the top-level feature itself and all its direct child features (i.e., features on the first nesting level) are searched for matching LoD representations. If an LoD representation is found for any (transitive) sub-feature, then the top-level feature is considered to satisfy the filter condition. If you pick the wildcard “*” for the search depth, all nested objects independent of their nesting level are considered.

For example, the following CityGML bldg:Building feature has a nested bldg:BuildingInstallation sub-feature and a nested bldg:WallSurface sub-feature. Moreover, the bldg:BuildingInstallation itself has a nested bldg:RoofSurface sub-feature.

<bldg:Building>
  <bldg:outerBuildingInstallation>
    <bldg:BuildingInstallation>
      <bldg:boundedBy>
        <bldg:RoofSurface></bldg:RoofSurface>
      </bldg:boundedBy>
    </bldg:BuildingInstallation>
  </bldg:outerBuildingInstallation>
  <bldg:boundedBy>
    <bldg:WallSurface></bldg:WallSurface>
  </bldg:boundedBy>
</bldg:Building>

When setting search depth to “1” in this example, not only the bldg:Building but also its nested bldg:BuildingInstallation and bldg:WallSurface are searched for a matching LoD representation, but not the bldg:RoofSurface of the bldg:BuildingInstallation. This roof surface is at nesting depth 2 when counted from the bldg:Building. Thus, search depth would have to be set to “2” to also consider this bldg:RoofSurface feature.

Caution

The higher the search depth, the more joins and subqueries are required on the database side to consider all nested features to the specified depth. This might result in slower query performance.

Feature counter filter

The feature counter filter limits the number of top-level features to be exported.

_images/impexp_export_feature_counter_filter.png

Feature counter filter for export operations.

Simply enter the number of features into the count field. The start index parameter indicates the index within the result set from which the export shall begin. The parameters can be used together or individually.

Note

The start index uses zero-based numbering. Thus, the first top-level feature is assigned the index 0, rather than the index 1.

Note

When using the feature counter filter with tiled exports, the count and start index settings are applied to each tile but not to the set of all features from all tiles. For example, if you set count to 10, every tile will contain up to 10 features. The total number of exported features can therefore be greater than 10.

Bounding box filter

The bounding box filter takes a 2D bounding box as parameter that is given by the coordinate values of its lower left (xmin, ymin) and upper right (xmax, ymax) corner. It is evaluated against the ENVELOPE column of the CITYOBJECT table.

_images/impexp_export_bbox_filter.png

Bounding box filter for export operations.

You can choose whether features whose envelopes overlap with the provided bounding box are to be exported (default), or whether their envelope must be inside. Alternatively, the export can be tiled by splitting the bounding box into a regular grid. The number of rows and columns of this grid can be defined by the user. Each tile is exported to its own output file. To make sure that every city object is assigned to one tile only, the center point of its envelope is checked to be either inside or on the left or top border of the tile.

Similar to the import operation, the coordinate values of the bounding box filter can either be entered manually or chosen interactively in a 2D map window. To open the map window, click on the map button map_select. A comprehensive guide on how to use the map window is provided in chapter Section 4.8.

Note

When choosing a spatial bounding filter, make sure that spatial indexes are enabled (use the index operation on the Database tab to check the status of indexes, cf. Section 4.3.3).

Note

If the entire 3D city model stored in the 3DCityDB instance shall be exported with tiling enabled, then a bounding box spanning the overall area of the model must be provided. This bounding box can be easily calculated on the Database tab (cf. Section 4.3.2).

Note

Using the center point of the envelope as the criterion for a tiled export has a side effect when tiling is combined with the counter filter: the number of city objects on the tile can be less than the number of city objects returned by the database query because the tile check happens after the objects have been queried. Therefore, the counter filter only sets a possible maximum number in this filter combination. This is correct behavior, so the Importer/Exporter will not report any errors.

Feature type filter

With the feature type filter, you can restrict the export to one or more feature types by enabling the corresponding checkboxes. Only features of the selected type(s) will be exported.

_images/impexp_export_feature_type_filter.png

Feature type filter for export operations.

The feature type filter only shows top-level feature types. It will automatically contain feature types from CityGML ADEs if a corresponding ADE extension has been correctly registered with the Importer/Exporter (see Section 5.3).

Note

When exporting city object groups, the following additional rules apply:

  1. If only the feature type CityObjectGroup is checked, then all city object groups together with all their group members (independent of their feature types) are exported.
  2. If further feature types are selected in addition to CityObjectGroup, then only group members matching those feature types are exported. Of course, all features that match the type selection but are not group members are also exported.

XML query expressions

<typeNames> parameter

The <typeNames> parameter lists the name of one or more feature types to query from the 3DCityDB. Each name is given as xsd:QName and must use an official XML namespace from CityGML 2.0 or 1.0. Only top-level feature types are supported. The CityGML version of the associated XML namespace determines the CityGML version used for the export dataset. Namespaces from different CityGML versions shall not be mixed.

The following example shows how to query CityGML 2.0 bridges and buildings:

<query xmlns="http://www.3dcitydb.org/importer-exporter/config">
  <typeNames>
    <typeName xmlns:brid="http://www.opengis.net/citygml/bridge/2.0">brid:Bridge</typeName>
    <typeName xmlns:bldg="http://www.opengis.net/citygml/building/2.0">bldg:Building</typeName>
  </typeNames>
</query>

If you want to query all feature types, then simply use the name core:_CityObject of the abstract base type in CityGML, or just skip the <typeNames> parameter.

The following table shows all supported top-level feature types together with their official CityGML XML namespace(s) and their recommended XML prefix.

Supported CityGML top-level feature types together with their XML namespace.
Feature type
XML prefix
XML namespace
_CityObject
core
http://www.opengis.net/citygml/2.0, http://www.opengis.net/citygml/1.0
Building
bldg
http://www.opengis.net/citygml/building/2.0, http://www.opengis.net/citygml/building/1.0
Bridge
brid
http://www.opengis.net/citygml/bridge/2.0
Tunnel
tun
http://www.opengis.net/citygml/tunnel/2.0
TransportationComplex
tran
http://www.opengis.net/citygml/transportation/2.0, http://www.opengis.net/citygml/transportation/1.0
Road
tran
http://www.opengis.net/citygml/transportation/2.0, http://www.opengis.net/citygml/transportation/1.0
Track
tran
http://www.opengis.net/citygml/transportation/2.0, http://www.opengis.net/citygml/transportation/1.0
Square
tran
http://www.opengis.net/citygml/transportation/2.0, http://www.opengis.net/citygml/transportation/1.0
Railway
tran
http://www.opengis.net/citygml/transportation/2.0, http://www.opengis.net/citygml/transportation/1.0
CityFurniture
frn
http://www.opengis.net/citygml/cityfurniture/2.0, http://www.opengis.net/citygml/cityfurniture/1.0
LandUse
luse
http://www.opengis.net/citygml/landuse/2.0, http://www.opengis.net/citygml/landuse/1.0
WaterBody
wtr
http://www.opengis.net/citygml/waterbody/2.0, http://www.opengis.net/citygml/waterbody/1.0
PlantCover
veg
http://www.opengis.net/citygml/vegetation/2.0, http://www.opengis.net/citygml/vegetation/1.0
SolitaryVegetationObject
veg
http://www.opengis.net/citygml/vegetation/2.0, http://www.opengis.net/citygml/vegetation/1.0
ReliefFeature
dem
http://www.opengis.net/citygml/relief/2.0, http://www.opengis.net/citygml/relief/1.0
GenericCityObject
gen
http://www.opengis.net/citygml/generics/2.0, http://www.opengis.net/citygml/generics/1.0
CityObjectGroup
grp
http://www.opengis.net/citygml/cityobjectgroup/2.0, http://www.opengis.net/citygml/cityobjectgroup/1.0

In order to simplify typing the <typeNames> parameter, you can omit the namespace declarations from the type names. The Importer/Exporter will then assume the corresponding CityGML 2.0 namespace, but only if you use the recommended XML prefix from the table above. The listing below exemplifies how to use this simplification to query all city furniture objects from the 3DCityDB.

<query>
  <typeNames>
    <typeName>frn:CityFurniture</typeName>
  </typeNames>
</query>
<propertyNames> projection clause

The <propertyNames> parameter identifies a subset of optional feature properties that shall be kept or removed in the target dataset. Property projections can be defined for all feature types that are part of the export, and thus not just for top-level feature types but also for nested feature types.

The <propertyNames> parameter consists of one or more <context> child elements, each of which must define the target feature type through the typeName attribute. A context then lists the name of one or more feature properties and/or generic attributes. The mode attribute determines the action for these properties: 1) if set to keep, then only the listed properties are kept in the target dataset, and all other properties are deleted from the feature (default); 2) if set to remove, then only the listed properties are deleted from the feature, and all other properties are kept.

The following listing shows an example in which only the properties bldg:measuredHeight and bldg:lod2Solid shall be exported for bldg:Building features (mode = keep). Note that this implies that all other thematic and spatial properties of buildings are deleted. For bldg:WallSurface features, all properties shall be kept besides the generic measure attribute area (mode = remove).

<query>
  <propertyNames>
    <context typeName="bldg:Building" mode="keep">
      <propertyName>bldg:measuredHeight</propertyName>
      <propertyName>bldg:lod2Solid</propertyName>
    </context>
    <context typeName="bldg:WallSurface" mode="remove">
      <genericAttributeName type="measureAttribute">area</genericAttributeName>
    </context>
  </propertyNames>
</query>

The typeName of the target feature type must be given as xsd:QName. As with the <typeNames> parameter, the XML namespace declaration can be skipped if the XML prefixes from Table 4.11 are used. Multiple <context> elements for the same typeName are not allowed.

Each propertyName must reference a valid property of the given feature type. This includes properties that are defined for the feature type or inherited from a parent type in the CityGML schemas, but also properties injected through an ADE. The propertyName is given as xsd:QName. Mandatory properties like gml:id cannot be removed.

Generic attributes are also referenced by their name using a genericAttributeName element. The name is case sensitive and thus must exactly match the name stored in the database. The optional type attribute can be used to more precisely specify the target generic attribute. If type is omitted, then all generic attributes matching the name are kept or removed, independent of their type. If you want to address all generic attributes of a given type but independent of their name, then use a propertyName instead as illustrated below. In this example, all gen:stringAttributes are removed from bldg:Building.

<query>
  <propertyNames>
    <context typeName="bldg:Building" mode="remove">
      <propertyName>gen:stringAttribute</propertyName>
    </context>
  </propertyNames>
</query>

The typeName may also point to an abstract feature type such as bldg:_AbstractBuilding or core:_CityObject. The property projection is then applied to all subtypes and can even be refined on the level of individual subtypes if the value of the mode attribute is identical. If mode differs, then the context of the subtype overrides the context of the (abstract) supertype.

The listing below shows how to remove gml:name and generic attributes of name location from all city objects by defining a projection context for the abstract type core:_CityObject. The projection is refined for bldg:Building by additionally removing bldg:measuredHeight.

<query>
  <propertyNames>
    <context typeName="core:_CityObject" mode="remove">
      <propertyName>gml:name</propertyName>
      <genericAttributeName>location</genericAttributeName>
    </context>
    <context typeName="bldg:Building" mode="remove">
      <propertyName>bldg:measuredHeight</propertyName>
    </context>
  </propertyNames>
</query>

If mode were switched to keep on the bldg:Building context in the above example, this would override the core:_CityObject settings for buildings. Thus, buildings would only keep the bldg:measuredHeight property. The core:_CityObject context would, however, still apply to all other city objects besides buildings.

<filter> selection clause

The <filter> parameter is used to identify a subset of city objects from the 3DCityDB whose property values satisfy a set of logically connected predicates. If the property values of a city object satisfy all the predicates in a filter, then that city object is part of the export.

Predicates can be expressed both on properties of the top-level feature types listed by the <typeNames> parameter and on properties of their nested feature types. If the predicates are not satisfied, then the entire top-level feature is not exported.

If the <typeNames> parameter lists more than one top-level feature type, then predicates may only be expressed on properties common to all of them.

The <filter> parameter supports comparison operators, spatial operators and logical operators. The meaning of the operators is identical to the operators defined in the OGC Filter Encoding (FE) 2.0 standard, but their encoding slightly differs.

Most expressions are formed using a valueReference pointing to a property value and a literal value that is checked against the property value.

Value references

A value reference is a string that represents a value that is to be evaluated by a predicate. The string can be the name of a property of the feature type or an XML Path Language (XPath) expression that represents the property of a nested feature type or a complex property.

Property names are given as xsd:QName. Examples for valid property names are core:creationDate, bldg:measuredHeight, and tun:lod2MultiSurface.

In cases where a property of a nested feature type or complex property shall be evaluated, the value reference must be encoded using XPath. The XPath expression is to be formulated based on the XML encoding of CityGML. Note that the Importer/Exporter only supports a subset of the full XPath language:

  • Only the abbreviated form of the child and attribute axis specifier is supported.
  • The context node is the top-level feature type to be exported. In case two or more top-level feature types are listed by the <typeNames> parameter, then the context node is their common parent type.
  • Each step in the path may include an XPath predicate of the form “.=value” or “child=value”. Equality tests can be logically combined using the “and” or “or” operators. Indexes are not supported as XPath predicate.
  • The schema-element() function is supported. It takes the xsd:QName of a feature type as parameter. The function selects the given feature type and all its subtypes.
  • The last step of the XPath must be a simple thematic attribute or a spatial property. Property elements that contain a nested feature are not allowed as last step.

Assuming that bldg:Building is the top-level feature type to be exported, then the following examples are valid XPath expressions:

  • gen:stringAttribute/@gen:name selects the gen:name attribute of the generic string attributes of the building
  • gen:stringAttribute[@gen:name='area']/gen:value selects the gen:value of a generic string attribute with the name “area”
  • bldg:boundedBy/bldg:WallSurface/bldg:lod2MultiSurface selects the spatial LoD2 representation of the wall surfaces of the building
  • bldg:boundedBy/bldg:WallSurface[@gml:id='ID_01' or gml:name='wall']/bldg:opening/bldg:Door/gml:name selects the gml:name of doors that are associated with wall surfaces having a specific gml:id or gml:name
  • bldg:boundedBy/schema-element(bldg:_BoundarySurface)/core:creationDate selects the core:creationDate attribute of all boundary surfaces of the building
  • core:externalReference[core:informationSystem='http://somewhere.de']/core:externalObject/core:name selects the core:name of the external object in an external reference to a given information system
  • gen:genericAttributeSet[@gen:name='energy']/gen:measureAttribute/gen:value selects the gen:value of all generic measure attributes contained in the generic attribute set named “energy”

Note

CityGML uses the eXtensible Address Language (xAL) to encode addresses of buildings, bridges and tunnels. xAL is very flexible and allows an address to be encoded in different ways, which makes XPath expressions complex to write. For this reason, the Importer/Exporter uses a simple ADE that can be used in XPath expressions to evaluate address elements such as the street or city name. More information is provided in Section 4.5.8.10.

Literals and geometric values

Literals are explicitly stated values that are evaluated against a valueReference. The type of the literal value must match the type of the referenced value.

If the literal value is a geometric value, the value must be encoded using one of the geometry types offered by the query language. The following geometry types are available:

  • <envelope>
  • <point>
  • <lineString>
  • <polygon>
  • <multiPoint> (list of <point> elements)
  • <multiLineString> (list of <lineString> elements)
  • <multiPolygon> (list of <polygon> elements)

An <envelope> is defined by its <lowerCorner> and <upperCorner> elements that carry the coordinate values. The coordinates of a <point> are provided by a <pos> element, whereas <lineString> uses a <posList> element. A <polygon> can have one <exterior> and zero or more <interior> rings. Rings are supposed to be closed, meaning that the first and the last coordinate tuple in the list must be identical. Interior rings must be defined in the opposite direction to the exterior ring.

The dimension of the points contained in a <posList> as well as in <exterior> and <interior> rings can be denoted using the dimension attribute. Valid values are 2 (default) or 3.

Every geometry type offers an optional srid attribute to reference an SRID defined in the underlying database. If srid is present, then the coordinate tuples are assumed to be given in the reference system associated with the corresponding SRID, which is also used in coordinate transformations. If srid is not present, then the coordinate tuples are assumed to be given in the SRID of the 3DCityDB instance.

2D bounding box
<envelope>
  <lowerCorner>30 10</lowerCorner>
  <upperCorner>60 20</upperCorner>
</envelope>
2D point
<point>
  <pos>30 10</pos>
</point>
2D line string given in SRID 4326
<lineString srid="4326">
  <posList dimension="2">45.67 88.56 55.56 89.44</posList>
</lineString>
2D polygon with hole
<polygon>
  <exterior>35 10 45 45 15 40 10 20 35 10</exterior>
  <interior>20 30 35 35 30 20 20 30</interior>
</polygon>
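3D line string (a sketch; the dimension attribute marks the coordinate list as 3D, and the coordinate values are arbitrary examples)
<lineString>
  <posList dimension="3">30 10 2.5 35 12 3.0</posList>
</lineString>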
Comparison operators

A comparison operator is used to form expressions that evaluate the mathematical comparison between two arguments. The following binary comparisons are supported:

  • <propertyIsEqualTo> (=)
  • <propertyIsLessThan> (<)
  • <propertyIsGreaterThan> (>)
  • <propertyIsLessThanOrEqualTo> (<=)
  • <propertyIsGreaterThanOrEqualTo> (>=)
  • <propertyIsNotEqualTo> (<>)

The optional matchCase attribute can be used to specify how string comparisons should be performed. A value of true means that string comparisons shall match case (default), false means caseless.

The following example shows how to export all buildings from the 3DCityDB whose bldg:measuredHeight attribute has a value less than 50.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsLessThan>
      <valueReference>bldg:measuredHeight</valueReference>
      <literal>50</literal>
    </propertyIsLessThan>
  </filter>
</query>
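
The matchCase attribute can be added to any of these binary comparison operators. The following sketch performs a caseless equality test on gml:name (the literal value is just an example):

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsEqualTo matchCase="false">
      <valueReference>gml:name</valueReference>
      <literal>city hall</literal>
    </propertyIsEqualTo>
  </filter>
</query>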

Besides these default binary operators, the following additional comparison operators are supported:

  • <propertyIsLike>
  • <propertyIsNull>
  • <propertyIsBetween>

The <propertyIsLike> operator expresses a string comparison with pattern matching. A combination of regular characters, the wildCard character (default: *), the singleCharacter (default: .), and the escapeCharacter (default: \) define the pattern. The wildCard character matches zero or more characters. The singleCharacter matches exactly one character. The escapeCharacter is used to escape the meaning of the wildCard, singleCharacter and escapeCharacter itself. The matchCase attribute is also available for the <propertyIsLike> operator.

The following example shows how to find all roads whose gml:name contains the string “main”.

<query>
  <typeNames>
    <typeName>tran:Road</typeName>
  </typeNames>
  <filter>
    <propertyIsLike wildCard="*" singleCharacter="." escapeCharacter="\" matchCase="false">
      <valueReference>gml:name</valueReference>
      <literal>*main*</literal>
    </propertyIsLike>
  </filter>
</query>

The <propertyIsNull> operator tests the specified property to see if it exists for the feature type being evaluated.
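
The following sketch illustrates the operator, assuming it takes a single <valueReference> like the other comparison operators; here the bldg:roofType property of buildings is tested:

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsNull>
      <valueReference>bldg:roofType</valueReference>
    </propertyIsNull>
  </filter>
</query>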

The <propertyIsBetween> operator is a compact way of expressing a range check. The lower and upper boundary values are inclusive. The operator is used below to find all buildings having between 10 and 20 storeys.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsBetween>
      <valueReference>bldg:storeysAboveGround</valueReference>
      <lowerBoundary>10</lowerBoundary>
      <upperBoundary>20</upperBoundary>
    </propertyIsBetween>
  </filter>
</query>
Spatial operators

A spatial operator determines whether its geometric arguments satisfy the stated spatial relationship. The following operators are supported:

  • <bbox>
  • <equals>
  • <disjoint>
  • <touches>
  • <within>
  • <overlaps>
  • <intersects>
  • <contains>
  • <dWithin>
  • <beyond>

The semantics of the spatial operators are defined in OGC Filter Encoding 2.0, 7.8.3, and in ISO 19125-1:2004, 6.1.14.

The valueReference of the spatial operators must point to a geometric property of the feature type or its nested feature types. If valueReference is omitted, then the gml:boundedBy property is used per default.

The listing below exemplifies how to use the <bbox> operator to find all city objects whose envelope stored in gml:boundedBy is not disjoint with the given geometry.

<query>
  <filter>
    <bbox>
      <envelope>
        <lowerCorner>30 10</lowerCorner>
        <upperCorner>60 20</upperCorner>
      </envelope>
    </bbox>
  </filter>
</query>

The following example exports all buildings having a nested bldg:GroundSurface feature whose bldg:lod2MultiSurface property intersects the given 2D polygon.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <intersects>
      <valueReference>bldg:boundedBy/bldg:GroundSurface/bldg:lod2MultiSurface</valueReference>
      <polygon>
        <exterior>35 10 45 45 15 40 10 20 35 10</exterior>
      </polygon>
    </intersects>
  </filter>
</query>

The last example demonstrates how to find all city furniture features whose envelope geometry is within the distance of 80 meters from a given point location. The uom attribute denotes the unit of measure for the distance. If uom is omitted, then the unit is taken from the definition of the associated reference system. If the reference system lacks a unit definition, meter is used as default value.

<query>
  <typeNames>
    <typeName>frn:CityFurniture</typeName>
  </typeNames>
  <filter>
    <dWithin>
      <valueReference>gml:boundedBy</valueReference>
      <point srid="4326">
        <pos>45.67 88.56</pos>
      </point>
      <distance uom="m">80</distance>
    </dWithin>
  </filter>
</query>
Logical operators

A logical operator can be used to combine one or more conditional expressions. The logical operator <and> evaluates to true if all the combined expressions evaluate to true. The <or> operator evaluates to true if any of the combined expressions evaluates to true. The <not> operator reverses the logical value of an expression. Logical operators can contain nested logical operators.

The following <and> filter combines a <propertyIsLessThan> comparison and a spatial <dWithin> operator to find all buildings with a bldg:measuredHeight less than 50 and within a distance of 80 meters from a given point location.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <and>
      <propertyIsLessThan>
        <valueReference>bldg:measuredHeight</valueReference>
        <literal>50</literal>
      </propertyIsLessThan>
      <dWithin>
        <valueReference>gml:boundedBy</valueReference>
        <point srid="4326">
          <pos>45.67 88.56</pos>
        </point>
        <distance uom="m">80</distance>
      </dWithin>
    </and>
  </filter>
</query>
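
The <not> operator wraps another predicate to invert it. The following sketch reuses the <propertyIsLike> predicate from above to select all buildings whose gml:name does not contain the string “main”:

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <not>
      <propertyIsLike wildCard="*" singleCharacter="." escapeCharacter="\" matchCase="false">
        <valueReference>gml:name</valueReference>
        <literal>*main*</literal>
      </propertyIsLike>
    </not>
  </filter>
</query>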
Object identifier operator

The <resourceIds> operator is a compact way of finding city objects whose object identifier is contained in the provided list of <id> elements. The provided identifiers are checked against the gml:id property in CityGML. In CityJSON, city objects are stored in the "CityObjects" property. The value of this property is a collection of key-value pairs, where the key is the identifier of the city object, and the value is the city object itself.

The example below exports all buildings whose identifier matches one of the values in the list.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <resourceIds>
      <id>ID_01</id>
      <id>ID_02</id>
      <id>ID_03</id>
    </resourceIds>
  </filter>
</query>
Database ID operator

In addition to the object identifier, you can also select city objects based on their database ID. The provided ID values are tested against the ID column of the CITYOBJECT table and only top-level objects having a matching ID value are exported.

The following snippet exemplifies the use of the <databaseIds> filter.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <databaseIds>
      <id>1</id>
      <id>4034</id>
      <id>12334</id>
    </databaseIds>
  </filter>
</query>
SQL operator

The <sql> operator lets you add arbitrary SQL queries to your filter expression. It can be combined with all other predicates.

The SQL query is provided in the <select> subelement. It must follow the same rules as discussed in Section 4.5.3. Most importantly, the query shall return a list of id values that reference the ID column of the table CITYOBJECT.

Note

Note that the query is encoded in XML. Thus, characters having special meaning in the XML language must be encoded using entity references. For example, the less-than sign < and greater-than sign > must be encoded as &lt; and &gt; respectively. Instead of using entity references, you can put your SQL string into a CDATA section. The string is then parsed as purely character data.

For example, the following SQL filter expression selects all id values from city objects having a generic attribute called energy_level whose double value is less than 12. The entity reference &lt; must be used here.

<query>
  <filter>
    <sql>
      <select>select cityobject_id from cityobject_genericattrib
        where attrname='energy_level' and realval &lt; 12</select>
    </sql>
  </filter>
</query>

When putting the same query into a CDATA section, the less-than sign must not be replaced with an entity reference.

<query>
  <filter>
    <sql>
      <select>
        <![CDATA[
          select cityobject_id from cityobject_genericattrib
            where attrname='energy_level' and realval < 12
        ]]>
      </select>
    </sql>
  </filter>
</query>
<sortBy> sorting clause

The <sortBy> parameter is used to specify a list of property names whose values should be used to order the set of city objects that satisfy the query. If no sorting clause is provided, the city objects are exported in an arbitrary order.

The value of the <sortBy> parameter is a list of one or more <sortProperty> elements, each of which must define a <valueReference> pointing to the property that shall be used for sorting. Only simple thematic attributes of the requested top-level feature type or one of its nested feature types are supported. If you specify multiple <sortProperty> elements, the result set is sorted by the first property in the list and that sorted result is sorted by the second property, and so on.

For each <sortProperty>, the sort order can be defined using the <sortOrder> parameter. The value asc indicates an ascending sort (default) and desc indicates a descending sort.

The following example illustrates how to sort all buildings according to their measured height in descending order.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <sortBy>
    <sortProperty>
      <valueReference>bldg:measuredHeight</valueReference>
      <sortOrder>desc</sortOrder>
    </sortProperty>
  </sortBy>
</query>
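
If several sort properties are needed, a sketch such as the following orders buildings first by bldg:storeysAboveGround (ascending by default) and then by bldg:measuredHeight in descending order:

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <sortBy>
    <sortProperty>
      <valueReference>bldg:storeysAboveGround</valueReference>
    </sortProperty>
    <sortProperty>
      <valueReference>bldg:measuredHeight</valueReference>
      <sortOrder>desc</sortOrder>
    </sortProperty>
  </sortBy>
</query>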
<limit> parameter

The <limit> parameter limits the number of explicitly requested top-level city objects in the export dataset. It offers the elements <count> and <startIndex> that can be used together or individually.

The <count> parameter indicates the total number of city objects that shall be exported from the set of city objects satisfying the query. The <startIndex> parameter lets you define the index within this result set at which the export shall begin.

Note

The <startIndex> uses zero-based numbering. Thus, the first city object is assigned the index 0, rather than the index 1. The default value of <startIndex> is 0.

The query below shows how to export at maximum 10 buildings from the database, even if more buildings satisfy the query.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <limit>
    <count>10</count>
  </limit>
</query>

The following query exports the next 10 buildings by starting with the 11th building in the result set. If the result set contains fewer buildings, the export dataset will, of course, also contain fewer buildings.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <limit>
    <count>10</count>
    <startIndex>10</startIndex>
  </limit>
</query>
<lods> parameter

The <lods> parameter lists the levels of detail (LoD) that shall be exported for the requested feature types.

The LoDs to be exported are given as a list of one or more <lod> elements having an integer value between 0 and 4. The optional mode attribute specifies whether a feature must have a spatial representation in all of the enumerated LoDs to be exported (mode = and), or whether it is enough that the feature has a spatial representation in at least one LoD from the list (mode = or, the default). The modes minimum and maximum are special cases of the or mode, for which only the lowest respectively highest of the matching LoDs is exported. If a feature has additional spatial representations in LoDs that are not listed, then these representations are not exported. If a feature does not satisfy the LoD filter condition at all, then it is skipped from the export.

Many feature types in CityGML can have nested sub-features. In such cases, the top-level feature itself is not required to have a spatial property, but the geometry can be modelled for its nested sub-features. For example, a bldg:Building feature does not need to provide an LoD 2 geometry through its own bldg:lod2Solid or bldg:lod2MultiSurface properties. Instead, it can have a list of nested boundary surfaces such as bldg:WallSurface and bldg:RoofSurface features that have own LoD 2 representations. Nevertheless, in this case the bldg:Building is considered to be represented in LoD 2.

To handle these cases, the <lods> parameter offers the optional searchMode attribute. When set to all, then all nested features are recursively scanned for having a spatial representation in the provided list of LoDs. If an LoD representation is found for any (transitive) sub-feature, then the top-level feature is considered to satisfy the filter condition. The all mode is, however, expensive because it requires many joins and sub-queries on the database level. When setting searchMode to depth instead, you can use the additional searchDepth attribute to specify the maximum depth to which nested sub-features are searched for LoD representations.

For example, the following bldg:Building feature has a nested bldg:BuildingInstallation sub-feature and a nested bldg:WallSurface sub-feature. Moreover, the bldg:BuildingInstallation itself has a nested bldg:RoofSurface sub-feature.

<bldg:Building>
  <bldg:outerBuildingInstallation>
    <bldg:BuildingInstallation>
      <bldg:boundedBy>
        <bldg:RoofSurface></bldg:RoofSurface>
      </bldg:boundedBy>
    </bldg:BuildingInstallation>
  </bldg:outerBuildingInstallation>
  <bldg:boundedBy>
    <bldg:WallSurface></bldg:WallSurface>
  </bldg:boundedBy>
</bldg:Building>

When setting searchDepth to 1 in this example, then not only the bldg:Building but also its nested bldg:BuildingInstallation and bldg:WallSurface are searched for a matching LoD representation, but not the bldg:RoofSurface of the bldg:BuildingInstallation. This roof surface is at nesting depth 2 when counted from the bldg:Building. Thus, searchDepth would have to be set to 2 to also consider this bldg:RoofSurface feature.

Per default, searchMode is set to depth with a searchDepth of 1.

The following listing exemplifies the use of the <lods> parameter. In this example, all tunnels shall be exported that have either an LoD 2 or LoD 3 representation. LoD representations are also searched on sub-features up to a nesting depth of 2.

<query>
  <typeNames>
    <typeName>tun:Tunnel</typeName>
  </typeNames>
  <lods mode="or" searchMode="depth" searchDepth="2">
    <lod>2</lod>
    <lod>3</lod>
  </lods>
</query>
<appearance> parameter

The <appearance> parameter filters appearances by their theme. To keep an appearance in the target dataset, the value of its app:theme attribute simply has to be enumerated using a <theme> subelement. The string values must match exactly.

The app:theme attribute in CityGML is optional and thus can be null. To be able to also express whether appearances having a null theme should be exported, the <appearance> parameter offers another subelement <nullTheme>, which is of type Boolean. If set to true, appearances with a null theme are exported, otherwise not (default).

The following query exports road features and appearances with theme summer and winter. Since <nullTheme> is set to false, appearances lacking an app:theme attribute are not exported.

<query>
  <typeNames>
    <typeName>tran:Road</typeName>
  </typeNames>
  <appearance>
    <nullTheme>false</nullTheme>
    <theme>summer</theme>
    <theme>winter</theme>
  </appearance>
</query>
<tiling> parameter

The <tiling> parameter allows for exporting the requested top-level features in tiles. Every tile is exported to its own target file within a separate subfolder of the export directory.

Like the bounding box settings of the simple GUI-based export filter (cf. Section 4.5.6), the <tiling> parameter requires three mandatory inputs: the <extent> of the geographic region that should be tiled and the number of <rows> and <columns> into which the region should be evenly split. The <extent> must be provided as a bounding box using a <lowerCorner> and an <upperCorner> element.

The example below exports all buildings within the provided <extent> into 2x2 tiles.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <tiling>
    <extent srid="4326">
      <lowerCorner>10.7005978 47.5707931</lowerCorner>
      <upperCorner>10.7093525 47.5767573</upperCorner>
    </extent>
    <rows>2</rows>
    <columns>2</columns>
  </tiling>
</query>

Besides the mandatory input, the optional <tilingOptions> parameter can be used to control the names of the subfolders and tile files, and whether tile information should be stored as generic attribute. The following subelements are supported:

Tiling options of the <tiling> parameter.

<tilePath>                        Name of the subfolder that is created for each tile (default: tile).
<tilePathSuffix>                  Suffix to append to each <tilePath>. Allowed values are row_column (default), xMin_yMin, xMax_yMin, xMin_yMax, xMax_yMax and xMin_yMin_xMax_yMax.
<tileNameSuffix>                  Suffix to append to each tile filename. Allowed values are none (default) and sameAsPath.
<includeTileAsGenericAttribute>   Add a generic attribute named TILE to each city object.
<genericAttributeValue>           Value for the generic attribute. Allowed values are identical to those for <tilePathSuffix> (default: xMin_yMin_xMax_yMax).

If the <tilingOptions> element is not present, then the settings defined in the export preferences (cf. Section 4.5.9.2) are used instead.
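
The following sketch combines the tiling example from above with tiling options, assuming that <tilingOptions> is given as a child element of <tiling> and that <includeTileAsGenericAttribute> takes a Boolean value. Tile subfolders are suffixed with the lower corner coordinates of the tile, and every exported city object receives a TILE attribute carrying the full tile extent:

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <tiling>
    <extent srid="4326">
      <lowerCorner>10.7005978 47.5707931</lowerCorner>
      <upperCorner>10.7093525 47.5767573</upperCorner>
    </extent>
    <rows>2</rows>
    <columns>2</columns>
    <tilingOptions>
      <tilePath>tile</tilePath>
      <tilePathSuffix>xMin_yMin</tilePathSuffix>
      <includeTileAsGenericAttribute>true</includeTileAsGenericAttribute>
      <genericAttributeValue>xMin_yMin_xMax_yMax</genericAttributeValue>
    </tilingOptions>
  </tiling>
</query>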

targetSrid attribute

The <query> element offers an optional targetSrid attribute. If targetSrid is provided, all exported geometries will be transformed into the target coordinate reference system. The targetSrid attribute must reference an SRID available in the underlying database. The transformation is performed using corresponding database functions.

<query targetSrid="25832"></query>
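
For example, the following sketch exports all buildings re-projected to SRID 25832 (assuming this SRID is defined in the underlying database):

<query targetSrid="25832">
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
</query>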
Address information and metadata

The 3DCityDB comes with a CityGML extension called 3DCityDB ADE that simplifies using address properties and metadata columns in XML queries. The following table shows the recommended XML prefix, namespaces and the XSD schema locations of the 3DCityDB ADE for both CityGML 2.0 and 1.0.

XML prefix, namespace and schema location of the 3DCityDB ADE.
XML prefix
citydb
XML namespace
XSD schema location

Address information

CityGML uses the OASIS xAL 2.0 standard for the representation of address information. xAL is very flexible in that it supports various address styles that can be XML-encoded in many ways. As a drawback, this flexibility makes it difficult to define a filter on address elements (e.g., the street or the city) using an XPath expression based on xAL. When importing address information into the 3DCityDB, the xAL address fragment is parsed and mapped onto the columns STREET, HOUSE_NUMBER, PO_BOX, ZIP_CODE, CITY, STATE and COUNTRY of the ADDRESS table. Thus, it is much simpler to express filter criteria on these columns.

For this reason, the 3DCityDB ADE injects additional properties into the core:Address feature of CityGML that correspond to the columns of the ADDRESS table. By this means, these properties can be used in filter expressions. The mapping between ADE properties and columns of the ADDRESS table is shown below. Note that the citydb prefix must be associated with the ADE XML namespace (see above). If the namespace declaration is omitted, the CityGML 2.0 namespace of the 3DCityDB ADE is assumed, provided that the prefix citydb is used.

3DCityDB ADE properties for accessing address information.

ADE property (injected into core:Address)   Data type   Column of the ADDRESS table
citydb:street                               xs:string   STREET
citydb:houseNumber                          xs:string   HOUSE_NUMBER
citydb:poBox                                xs:string   PO_BOX
citydb:zipCode                              xs:string   ZIP_CODE
citydb:city                                 xs:string   CITY
citydb:state                                xs:string   STATE
citydb:country                              xs:string   COUNTRY

The following example illustrates how to query all buildings along the street Unter den Linden. It uses the citydb:street ADE property as value reference in the filter expression.

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsLike wildCard="*" singleCharacter="." escapeCharacter="\" matchCase="true">
      <valueReference>bldg:address/core:Address/citydb:street</valueReference>
      <literal>Unter den Linden*</literal>
    </propertyIsLike>
  </filter>
</query>

Note

In the output file, the address information is always encoded using xAL and not using the 3DCityDB ADE properties to ensure that regular CityGML applications without ADE support can still parse the address.

3DCityDB metadata of city objects

The 3DCityDB stores database-specific metadata with every city object using the columns LAST_MODIFICATION_DATE, UPDATING_PERSON, REASON_FOR_UPDATE and LINEAGE of the CITYOBJECT table. These metadata properties are only defined for the 3DCityDB but not in CityGML. To make them available in filter expressions, the 3DCityDB ADE therefore injects them into the CityGML core:_CityObject feature. This way, you can filter city objects based on values stored in the metadata columns of the 3DCityDB.

3DCityDB ADE properties for accessing database-specific metadata information.

ADE property (injected into core:_CityObject)   Data type   Column of the CITYOBJECT table
citydb:lastModificationDate                     xs:string   LAST_MODIFICATION_DATE
citydb:updatingPerson                           xs:string   UPDATING_PERSON
citydb:reasonForUpdate                          xs:string   REASON_FOR_UPDATE
citydb:lineage                                  xs:string   LINEAGE

The following example shows how to use the ADE metadata properties in filter expressions. The query fetches all bridges that have been modified in the database after 2018-01-01.

<query>
  <typeNames>
    <typeName>brid:Bridge</typeName>
  </typeNames>
  <filter>
    <propertyIsGreaterThan>
      <valueReference>citydb:lastModificationDate</valueReference>
      <literal>2018-01-01</literal>
    </propertyIsGreaterThan>
  </filter>
</query>

A query expression is an action that directs the export operation to search the 3DCityDB for city objects that satisfy some filter expression encoded within the query. Query expressions are given in XML using a <query> root element. The XML language used is specific to the Importer/Exporter and the 3DCityDB but draws many concepts from OGC standards such as Filter Encoding (FE) 2.0 and Web Feature Service (WFS) 2.0.

Note

All XML elements of the query language are defined in the XML namespace http://www.3dcitydb.org/importer-exporter/config. Simply define this namespace as default namespace on your <query> root element.

A query expression may contain a typeNames parameter, a projection clause, a selection clause, a sorting clause, a counter filter, an LoD filter, an appearance filter, tiling options and a targetSrid attribute for coordinate transformations.

Elements of an XML query expression.

Element           Description
<typeNames>       Lists the name of one or more feature types to query (optional).
<propertyNames>   Projection clause that identifies a subset of optional feature properties that shall be kept or removed in the target dataset (optional).
<filter>          Selection clause that specifies criteria that conditionally select city objects from the 3DCityDB (optional).
<sortBy>          Sorting clause to specify how city objects shall be ordered in the target dataset (optional).
<limit>           Limits the number of requested city objects that are exported to the target dataset (optional).
<lods>            Limits the LoDs of the exported city objects to a given subset (optional).
<appearance>      Limits the appearances of the exported city objects to a given subset (optional).
<tiling>          Defines a tiling scheme for the export (optional).
targetSrid        Defines a coordinate transformation (optional).
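
A query may combine several of these clauses. The following sketch (element order follows the list above) exports at most 100 buildings with a measured height above 20, ordered by decreasing height:

<query>
  <typeNames>
    <typeName>bldg:Building</typeName>
  </typeNames>
  <filter>
    <propertyIsGreaterThan>
      <valueReference>bldg:measuredHeight</valueReference>
      <literal>20</literal>
    </propertyIsGreaterThan>
  </filter>
  <sortBy>
    <sortProperty>
      <valueReference>bldg:measuredHeight</valueReference>
      <sortOrder>desc</sortOrder>
    </sortProperty>
  </sortBy>
  <limit>
    <count>100</count>
  </limit>
</query>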

In addition, separate sections in this chapter discuss how to use address information and 3DCityDB metadata in query expressions.

Caution

XML queries are based on the CityGML XML schemas. For instance, feature and property names as well as property values and multiplicities follow the CityGML XML schema definitions. Likewise, value references in filter conditions must be given as XPath expressions based on the XML schemas. The alternative CityJSON encoding is not supported.

Preferences

The export operation can be customized with various preference settings that are presented in this chapter.

General

This preferences dialog lets you define general settings affecting the export operation.

_images/impexp_export_preferences_general_fig.png

Export preferences – General options.

General options

In the first part of the dialog [1], you can choose the CityGML version that shall be used for exports. The default value is CityGML 2.0, which is the current version of the OGC CityGML Encoding Standard. In addition, also the previous version 1.0 is still supported.

When exporting data in CityGML format, this option obviously determines whether the data is encoded using the CityGML XML schemas of version 2.0 or 1.0. But since CityGML is not just an encoding but also a conceptual data model, this option has other impacts as well. Most importantly, feature types such as bridges and tunnels are not available in CityGML 1.0. When choosing CityGML 1.0 on this preferences dialog, these feature types cannot be selected in the corresponding feature type filter (see Section 4.5.7) and will also not be exported from the database. Thus, even if you choose CityJSON as encoding format for the export operation and your database contains bridges or tunnels, they will not be written to the output file if the CityGML version is set to 1.0 here.

The Cancel export immediately in case of errors option [1] lets you define how the export operation should deal with error situations during database exports. By default, the process fails fast on errors and thus aborts immediately. Disable this option in case you rather want the export operation to continue on errors and complete the export process if possible. The errors encountered during the export process are always recorded in the log in both cases.

By default, the export operation first computes the number of top-level objects [1] matching the provided filters on the Export tab before starting the export process. This number is printed to the log and also used to render a progress bar for the export operation. However, computing this number can take a long time on large databases. Thus, this option can be disabled so that the export process starts immediately instead.

As described in Section 4.5, the export operation supports using compressed output formats like GZIP and ZIP, which helps to keep file sizes small. You can choose whether CityGML (default) or CityJSON shall be used as data encoding for compressed formats using the drop-down list offered by this preferences dialog [1].

Bounding box options

For every city object in the database, a bounding box can be stored in the ENVELOPE column of the CITYOBJECT table. However, when exporting data, only the bounding boxes of top-level objects [2] are written to the dataset by default. You can change this default behaviour and choose to export bounding boxes for all objects, or not to export object bounding boxes at all.

In addition, a bounding box embracing all top-level features in the output file can be written to the dataset. Simply enable the corresponding option shown in Fig. 4.40. The bounding box will be calculated anew for every export process. This might take some time depending on the number of features to be exported.

For tiled exports, also the tile extent can be used as bounding box for the entire dataset instead. The tile extent follows from the specified bounding box filter and the number of rows and columns (see Section 4.5.6) and thus does not need to be calculated. However, note that the tile extent might be larger or even smaller than the actual extent calculated from the features.

Tiling options

The Importer/Exporter allows for applying a spatial bounding box filter to exports on the Export tab of the operations window. To trigger a tiled export, a user can additionally provide the number of rows and columns into which the bounding box shall be evenly split (cf. Section 4.5.6).

When tiling is enabled in this way, the export operation iterates over all tiles within the bounding box and exports the city objects on each tile. Every tile is exported to its own file within a separate subfolder. With the Tiling options settings, the names of the subfolders and tile files can be adapted as shown below.

_images/impexp_export_preferences_tiling_options_fig.png

Export preferences – Tiling options.

The user must always specify just a single output file on the Export tab even if tiling shall be used (Section 4.5). For example, assume a user has entered the following output file:

/home/user/my_3d_model/my_city.gml

For a tiled export, this information is split into two components:

  • Export directory: /home/user/my_3d_model/
  • Filename: my_city.gml

The Importer/Exporter will create one subfolder for each tile inside the export directory. By default, the folder name is tile_x_y, with x and y being the row and column of the current tile. Both the tile subdirectory (default: tile) and the subdirectory suffix (default: row / column) can be adapted using the corresponding input fields. When a combination of the tile’s lower and upper corner is chosen instead of the row and column numbers, the coordinates will be given in the reference system specified for the export (cf. Section 4.5; default value is the internal SRS of the 3D City Database instance), even if the coordinates of the bounding box filter are given in another user-defined SRS. This makes it easy to relate city objects to tiles since the coordinates of the city objects contained in the tile are exported in the same reference system.

The filename of each tile file is chosen to be identical with the filename of the output file on the Export tab by default (my_city.gml in the above example). However, you may also decide to append a tile-specific filename suffix.

Example: Based on the example output file from above and assuming default settings, the export would create the following folder structure:

/home/user/my_3d_model/
├── tile_0_0/
|   └── my_city.gml
├── tile_1_0/
|   └── my_city.gml
├── tile_2_0/
|   └── my_city.gml
...

For further traceability, it is possible to attach a generic string attribute of name TILE to each exported top-level feature, indicating which tile it belongs to. The options for the value of the generic attribute are the same as for the suffix of the tile subfolder.

CityObjectGroup

When exporting city object groups, also group members are written to the target dataset (cf. Section 4.5.7). Group members are always exported by reference. For CityGML, this means that the grp:member property of a CityObjectGroup does not contain the member inline but uses an xlink:href reference to point to the member. In CityJSON, the "members" property of a CityObjectGroup is per definition just an array of the identifiers of the group members. In both cases, the group members themselves are therefore exported as separate top-level features in the same dataset.

_images/impexp_export_preferences_cityobjectgroup_fig.png

Export preferences – CityObjectGroup.

By default, group members are only exported if they satisfy the export filter settings. This behavior can be changed using this preference dialog. When checking the option Export all group members as references only, then a reference is created for each group member of a CityObjectGroup defined in the database, no matter whether this group member is also exported or skipped due to filter settings. Thus, the consistency of the references is not checked, and some references might not be resolvable in the final dataset. The benefit of skipping this check is that the overall performance of the export is increased.

Appearance

The appearance export preferences control how appearance information of city objects is written to the output datasets.

_images/impexp_export_preferences_appearance_fig.png

Export preferences – Appearance.

By default, both appearance information and texture image files associated with the city objects in the 3D City Database are exported [1]. Alternatively, the user can choose to only export the appearance information without textures or to skip appearances completely.

When exporting texture files, the additional options Overwrite existing texture files and Generate unique texture filenames influence the way in which texture files are written to the file system [1].

  1. Overwrite existing texture files: Texture files are stored in a separate folder of the file system. If this option is disabled, the Importer/Exporter checks before exporting a texture image file whether a file with the same filename already exists in this folder and, if so, keeps the existing file. If the option is enabled, which is the default, no check is performed and a texture file of the same name will simply be overwritten.
  2. Generate unique texture filenames: Often filenames for texture images are automatically created from a naming scheme involving some counter (e.g., a prefix “tex” followed by a number incremented by 1 for each new image). It thus can happen that two city objects within the same or different instance documents are assigned a texture image file of the same name but with different content. In the 3D City Database, texture images are stored in separate records and thus duplicate filenames are not an issue. When exporting the data, however, two texture files of the same name might be written to the same target folder, in which case one is replaced with the other. This will obviously lead to false visualizations and issues in workflows consuming the exported data. For this reason, checking this option will force the export process to generate unique and stable filenames for each texture file.

The location where the texture files are stored can be defined by the user [2]. The default option is to pick a folder below the export directory and thus relative to the output file. The default folder name is “appearance”. Instead of a local path, an absolute path can also be provided. In this case, the same folder will be used in subsequent exports from the 3D City Database.

Note

When the data is exported into a ZIP archive, texture files are also stored inside the archive if a local path is chosen.

When appearances are chosen to be exported but the Do not store texture files option [1] is checked, then appearance information is generated for the city objects in the output file, but the texture files are not stored in the file system. However, since the texture path is part of the appearance information, the directory settings [2] and the option to generate unique texture filenames [1] still have an impact on the generated appearance information. The Do not store texture files option is useful, for example, if the texture files have already been exported to an absolute directory in a previous run of the export operation.

Especially when running the Importer/Exporter on a Windows machine, placing a large number of files into the same folder might lead to severe I/O lags. This might negatively affect the performance for large exports. For this reason, the Importer/Exporter can automatically distribute the texture files over additional subfolders that are automatically created. Simply check the option Automatically place texture files in additional subfolders and provide the number of subfolders to be used.

Geometry

The Importer/Exporter can apply an affine coordinate transformation to all geometry objects that are exported from the database. This option is disabled by default.

_images/impexp_export_preferences_geometry_fig.png

Export preferences – Geometry.

You define the affine transformation by entering the coefficients of the transformation matrix in this preferences dialog (cf. Fig. 4.44). Please refer to Section 4.4.6.5 for a general discussion on affine transformations and the meaning of the coefficients.

The export operation applies the affine transformation to all coordinates of all city objects. This also includes all matrices in the data like the 2x2 matrices of GeoreferencedTextures (CityGML only), the 3x4 transformation matrices of TexCoordGen elements (CityGML only) used for texture mapping and the 4x4 transformation matrices for ImplicitGeometries.

Example: For an ordinary translation of all city objects by 100 meters along the x-axis and 50 meters along the y-axis (assuming all coordinate units are given in meters), you have to enter the coefficients of the following transformation matrix:

\[\begin{split}{\overrightarrow{p}}^{'} = \begin{bmatrix} 1 & 0 & 0 & 100 \\ 0 & 1 & 0 & 50 \\ 0 & 0 & 1 & 0 \\ \end{bmatrix} \bullet \overrightarrow{p}\end{split}\]
CityGML general

Note

These preference settings only apply to CityGML exports.

The general CityGML export options allow you to enable pretty printing for the output file.

_images/impexp_export_preferences_citygml_general_fig.png

Export preferences – General CityGML options.

When enabled (default), indentation is used for the XML document and every XML element is put on a new line. Without pretty printing, the entire data is put on a single long line. The file size will be considerably smaller without pretty printing, but the content is difficult to visually comprehend. Pretty printing should therefore be disabled in case the file is mainly used in machine-to-machine communication.

Address

Note

These preference settings only apply to CityGML exports.

Similar to the import of xAL address information (see Section 4.4.6.6), a user can choose how address information should be exported to a target CityGML dataset. The available options of the address export preferences are shown in the figure below.

_images/impexp_export_preferences_address_fig.png

Export preferences – Address.

When importing address data, the xAL fragment is parsed and the main address elements are stored in the columns of the ADDRESS table. As discussed in Section 4.4.6.6, the original <xal:AddressDetails> element can additionally be imported “as is” into the XAL_SOURCE column. Thus, there are two possible ways to reconstruct the address information.

  1. The default option is to build the xAL address from the columns of the ADDRESS table without considering the XAL_SOURCE column. In this case, the XML encoding of the xAL address uses the following template. The column names of the ADDRESS table are printed in capital letters and are replaced with the values from the database at export time.

    <Address>
      <xalAddress>
        <xAL:AddressDetails>
          <xAL:Country>
            <xAL:CountryName>COUNTRY</xAL:CountryName>
            <xAL:Locality Type="City">
              <xAL:LocalityName>CITY</xAL:LocalityName>
              <xAL:PostBox>
                <xAL:PostBoxNumber>PO_BOX</xAL:PostBoxNumber>
              </xAL:PostBox>
              <xAL:Thoroughfare Type="Street">
                <xAL:ThoroughfareNumber>HOUSE_NUMBER</xAL:ThoroughfareNumber>
                <xAL:ThoroughfareName>STREET</xAL:ThoroughfareName>
              </xAL:Thoroughfare>
              <xAL:PostalCode>
                <xAL:PostalCodeNumber>ZIP_CODE</xAL:PostalCodeNumber>
              </xAL:PostalCode>
            </xAL:Locality>
          </xAL:Country>
        </xAL:AddressDetails>
      </xalAddress>
    </Address>
    
  2. Optionally, the xAL fragment is taken “as is” from the XAL_SOURCE column and inserted literally into the target CityGML document. This way there will be no loss of information and the address encoding will be identical to the original source datasets. Obviously, this option requires that the XAL_SOURCE column has been populated during import (cf. Section 4.4.6.6).

The two options are mutually exclusive, but if the chosen option does not provide a result, the other option can be used as a fallback.

XSL Transformation

Note

These preference settings only apply to CityGML exports.

Like with CityGML imports, you can apply XSL transformations during the export process to change the resulting CityGML output data. Simply check the Apply XSLT stylesheets option and point to an XSLT stylesheet in your local file system using the Browse button. The stylesheet will be automatically considered by the export process to transform the CityGML data before it is written to a file.

_images/impexp_export_preferences_xsl_fig.png

Export preferences – XSL transformation.

By clicking the + and - buttons, more than one XSLT stylesheet can be provided to the exporter. The stylesheets are applied in the given order, with the output of a stylesheet being the input for its direct successor. The Importer/Exporter is shipped with example XSLT stylesheets in the folder templates/XSLTransformations of the installation directory.

Note

  • To be able to handle arbitrarily large exports, the export process reads single top-level features from the database, which are then written to the target file. Thus, each XSLT stylesheet will just work on individual top-level features but not on the entire file.
  • The output of each XSLT stylesheet must again be a valid CityGML structure.
  • Only stylesheets written in the XSLT language version 1.0 are supported.
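
As an illustration, the following minimal XSLT 1.0 stylesheet is a sketch (not one of the shipped templates): it copies each top-level feature unchanged except that all gml:name elements are dropped from the output.

<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:gml="http://www.opengis.net/gml">
  <!-- Identity template: copy every node and attribute as-is -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- Suppress all gml:name elements -->
  <xsl:template match="gml:name"/>
</xsl:stylesheet>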
CityJSON options

Note

These preference settings only apply to CityJSON exports.

_images/impexp_export_preferences_cityjson_fig.png

Export preferences – CityJSON options.

General options

For CityJSON exports, the preference settings shown in Fig. 4.49 allow the user to enable pretty printing for the output file (default: disabled). When enabled, indentation is used for the JSON document and JSON elements are put on a new line. Without pretty printing, the entire data is put on a single long line. The file size will be considerably smaller without pretty printing, but the content is difficult to visually comprehend.

To reduce the file size, CityJSON also offers a way to compress the geometry by representing the coordinates of vertices with integer values rather than doubles (also called quantization). The scale factor and the translation needed to obtain the original coordinates are also stored in the file. Simply enable the Use geometry compression option to apply this compression method (enabled by default).
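
As a sketch of the idea: in a compressed CityJSON file, the vertices are stored as integers, and the "transform" member provides a scale vector s and a translate vector t from which an application recovers the original coordinate p of a vertex v per axis as

\[p_{i} = v_{i} \cdot s_{i} + t_{i},\quad i \in \{x, y, z\}\]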

In CityGML, geometries can be reused using XLink references in order to avoid duplicate geometries. For example, assume a building with a dormer. The bldg:Building object in CityGML could be modelled such that its geometry (gml:Solid or gml:MultiSurface) also contains the shape of the dormer. In addition, the dormer itself can be modelled as separate bldg:BuildingInstallation child object of the building with its own geometry, which would reference those polygons from the building’s geometry that describe the shape of the dormer.

CityJSON does not support reusing or referencing geometries. When exporting the above example as CityJSON, the "BuildingInstallation" would therefore also have its own geometry, but since the polygons from the "Building" object cannot be referenced, this geometry would be a duplicate. To avoid issues with duplicate geometries in applications consuming the data, you can activate the Remove duplicate geometries of child objects option.

Note

If the child object does not have any remaining geometry after removing the duplicate polygons, the entire child object will be removed as well.

Another general option available for CityJSON exports is to add a special attribute of name "sequenceId" to every top-level city object when sorting is enabled for the export (see Section 4.5.8.4). In a CityJSON file, all city objects are stored as "CityObjects" collection. This collection, however, is an unordered set and thus cannot be guaranteed to reflect the result of the sorting operation. The "sequenceId" therefore adds this information by storing the index of the city object in the sorted result set.

Note

The "sequenceId" uses zero-based numbering. Thus, the first top-level object according to the sorting criteria is assigned the index 0, rather than the index 1.

Precision

For both the coordinates of vertices and texture coordinates, you can define the number of decimal places that should be kept in the output file. If the number of decimal places is greater in the database, the coordinate values are rounded accordingly during export. This helps to reduce the size of the resulting output file. The default behaviour is to keep 3 decimal places for vertex coordinates and 7 decimal places for texture coordinates.

Caution

A side-effect of rounding coordinate values is that two very close but distinct vertices in the database might receive the same coordinate values in the CityJSON file. If these two vertices are, for instance, consecutive points on the ring of a polygon, the Importer/Exporter will not remove one or the other from the ring. Thus, the exported polygon will contain duplicate consecutive points.

Resources
_images/impexp_export_preferences_resources_fig.png

Export preferences – Resources.

Multithreaded processing

The export process is implemented based on multithreaded data processing in order to increase the overall export performance. The Resource preferences allow for setting the minimum and maximum number of concurrent threads to be used in the export process [1]. Make sure to enter reasonable values depending on your hardware configuration. By default, the maximum number is set to the number of available CPUs/cores times two.

Caution

A higher number of threads does not necessarily result in better performance. On the contrary, too many active threads introduce disadvantages such as thread life-cycle overhead and resource thrashing. Also note that each thread requires its own physical connection to the database. Therefore, your database must be ready to handle enough parallel physical connections. Ask your database administrator for assistance.

Batch processing

In order to optimize database response times, multiple features satisfying the export filters are requested from a feature table with a single SELECT statement rather than using a separate request for each feature (batch processing). This allows for reducing the number of overall SELECT statements that are sent to the database to fetch the data. Especially in case the Importer/Exporter and the database server are not running in the same local network and, thus, the network latency is high, batch processing can help to reduce the export time. Batch processing is also applied in the same way for fetching geometries and BLOB data such as texture images. The preferences dialog allows for setting the number of objects to be fetched with one request for all three cases [2] (default: 30).

Note

All objects requested within one batch from the database are processed in main memory. Thus, the Importer/Exporter might run out of memory if the batch size is too high (see Section 4.1 for how to increase the available main memory).

Object identifier cache

To be able to reconstruct XLink references for CityGML exports (cf. Section 4.5.9.8), the export process needs to keep track of the identifiers (gml:id values) of exported features and geometry objects. For fast access, the identifiers are kept in main memory and are only drained to temporary tables in case the predefined cache size limit is reached.

You can define the maximum number of identifiers to be held in main memory as well as the page factor and number of parallel table partitions used for the feature and geometry caches [3]. The meaning of the values is identical to the Resource preferences for the import operation. So, please refer to Section 4.4.6.12 for more details.

Note

By default, the temporary tables for draining the caches are created in the same 3D City Database instance. You can also choose to use a local cache instead (see Section 4.7.3 for more details). However, note that even if you use a local cache, some temporary information must still be stored in the database in order to perform JOINs between temporary tables and tables of the 3DCityDB schema.

3D city model content stored in a 3D City Database can be exported as CityGML and CityJSON datasets on the Export tab of the operations window.

_images/impexp_CityGML_export_dialog_fig.png

The export dialog.

Output file selection

At the top of the export dialog, the folder and filename of the target dataset must be specified [1]. You can either enter the output file manually or open a file selection dialog via the Browse button. The export operation supports the output file formats listed below. Simply make sure the output file ends with the file extension of the format you want to export.

Supported output file formats and extensions
Format
File extensions
CityGML (version 2.0 or 1.0)
*.gml, *.xml
CityJSON (version 1.0)
*.json, *.cityjson
GZIP compressed file
*.gz, *.gzip
ZIP archive
*.zip

For compressed formats, you can specify whether the data should be encoded in CityGML or CityJSON using the General preference settings (see Section 4.5.9.1). When choosing ZIP as target format, additional export files such as texture images are also written to the ZIP archive by default.

Coordinate transformation

In general, coordinate values of geometry objects are associated with the coordinate reference system defined for the 3D City Database instance during setup and are exported “as is” from the database. The export operation allows a user to apply a coordinate transformation to another reference system during export. The target coordinate reference system is chosen from the corresponding drop-down list [2]. This list can be augmented with user-defined reference systems (cf. Section 4.7.2 for more details). When picking the entry “Same as in database”, no coordinate transformation is applied (default behavior).

Export filters

Similar to the import process, the export operation offers thematic and spatial filters to restrict an export to a subset of the 3D city model content stored in the database [3]. The available filters are discussed in separate sections of this chapter.

To enable a filter, simply select its checkbox. This will automatically make the filter dialog visible. Make sure to provide the mandatory input for the filter to work correctly. If more than one filter is enabled, the filters are combined in a logical AND operation, i.e. all filter criteria must be fulfilled for a city object to be exported. If no checkbox is enabled, no filters are applied and, thus, all features contained in the database will be exported.

Note

All export filters are only applied to top-level features but not to nested sub-features.

Advanced XML-based queries

Besides the simple export filters discussed above, the export can also be controlled through a more advanced query expression that offers logical operators (AND, OR, NOT) to combine thematic and spatial filters to complex conditions. Moreover, it allows for defining projections on the properties of the exported city objects and provides a filter for different appearance themes. All simple filters from above are, of course, also available for query expressions.

Query expressions are encoded in XML using a <query> root element. The query language has been developed for the purpose of the 3D City Database but is strongly inspired by and very similar to the OGC Filter Encoding 2.0 and the query expressions used by the OGC Web Feature Service 2.0.

To provide an XML-based query expression, click on XML query [5] at the bottom right of the export dialog (cf. Fig. 5.8). The simple filter dialogs will be replaced with an XML input field as shown below.

_images/impexp_XML_query_dialog_fig.png

Input field to enter an XML-based query expression.

The XML query is entered in [1]. This requires knowledge about the structure and the allowed elements of the query language. A documentation of the query language is provided in Section 4.5.8.

The new query button new_query_icon on the right side of the input field [2] can be used to create an empty query expression that contains all allowed elements of the query language. The copy query button copy_query_icon translates all settings made for the simple filters into an XML query representation. The results of both actions can therefore be used as a starting point for defining your own query expression. The validate query button validate_query_icon performs a validation of the query entered in [1] and prints the validation report to the console window. Only valid query expressions are accepted by the export operation. Finally, the generate SQL button query_to_sql_icon translates the XML query into an SQL SELECT statement that you can directly copy and use with a database client. Note that you must first connect to a 3DCityDB instance before you can create the SQL expression.

The Simple filter button [3] takes you back to the simple filter dialog.

Note

You can also use an external XML editor to write query expressions. External editors might be more comfortable to use and often offer additional tools like auto completion. The XML Schema definition of the query language (required for validation and auto completion) can be exported via “File -> Save Settings XSD As…” on the main menu of the Importer/Exporter (cf. Section 4.2). Make sure to use a <query> element as root element of the query expression in your external XML editor.

Tiled exports

When exporting 3D city model content to a single output file, the file size may quickly grow. For CityGML files, the Importer/Exporter supports writing files of arbitrary size (only limited by the file system of the operating system). However, such files might become too large to be processed by other applications. Tiled exports are useful in this case because the export is split into a regular grid of tiles each of which is written to its own output file. Every tile only contains a subset of all city objects to be exported and, thus, the file sizes will be smaller compared to a single output file. To enable tiling, you must use a bounding filter (see Section 4.5.6) or a corresponding XML query expression.

Even for tiled exports, you only specify a single output file in [1]. Each tile is automatically stored in its own subfolder named tile_x_y, with x and y being the row and column of the tile in the grid. The names of both the subfolders and the tile files can be augmented with tile-specific suffixes (see the tiling options in Section 4.5.9.2).

Export preferences

More fine-grained preference settings affecting the export operation are available on the Preferences tab of the operations window [6]. Make sure to check these settings before starting the export process. A full documentation of the export preferences is provided in Section 4.5.9. The following table provides a summary overview.

Summary overview of the export preferences
General options like the CityGML version to be used for exports.
Settings for tiled exports. Requires that tiling is enabled on the bounding box filter.
Defines whether group members are exported by value or by reference.
Defines whether appearance information should be exported.
Allows applying an affine transformation to all output geometries.
General options affecting the CityGML export (CityGML only).
Controls the way in which xAL address fragments are exported from the database (CityGML only).
Controls whether referenced features or geometry objects are exported using XLinks or as deep copies (CityGML only).
Defines one or more XSLT stylesheets that shall be applied to the exported city objects in the given order before writing them to file (CityGML only).
General options affecting the CityJSON export (CityJSON only).
Allocation of computer resources used in the export operation.

Starting the export process

Once all export settings are correct, the Export button [4] starts the export process (cf. Fig. 5.8). If a database connection has not been established manually beforehand, the currently selected entry on the Database tab is used to connect to the 3D City Database. The separate steps of the export process as well as all errors and warnings that might occur during the export are reported to the console window. The overall progress is shown in a separate status window. This status window also offers a Cancel button to abort the export process at any time.

Caution

While CityGML files can be written in a streaming fashion (i.e., one top-level feature after the other), which allows for keeping the memory footprint low, large parts of a CityJSON file must be kept in main memory before the entire file can be written to the file system. In order to avoid memory issues when exporting to CityJSON, it is therefore strongly recommended to reduce the number of city objects to be exported to the CityJSON output file. This can be achieved, for instance, by applying export filters or by using a tiled export (recommended).

Visualization export

Feature version filter

The 3D City Database supports storing multiple versions of the same feature to enable object histories. The object history is captured based on timestamp values for the creation and termination date of a feature version, which are stored in the CREATION_DATE and TERMINATION_DATE columns of the CITYOBJECT table.

Using the feature version filter, a user can choose which version of the top-level features should be selected in a visualization export operation.

_images/impexp_export_vis_feature_version_filter.png

Feature version filter for visualization export operations.

The different feature version options available from the drop-down list are described below.

Overview of the different feature version options
Feature Version
Description
Latest version
Selects top-level features that are not marked as terminated in the database and, thus, whose TERMINATION_DATE attribute is null.
Valid version
Selects top-level features that were valid at a given timestamp or for a given time range. The filter is evaluated against the CREATION_DATE and TERMINATION_DATE attributes.
Terminated version
Selects only terminated top-level features. You can choose to either select all terminated features or only those that were terminated at a given timestamp. The filter is evaluated against the TERMINATION_DATE attribute.

For example, you can use Valid version to create a visualization of a past state of your 3D city model (e.g., on March 1st, 2018) and compare it to a visualization of the current version.
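
For illustration only (this is a sketch, not the exact SQL generated by the Importer/Exporter), the Valid version option for a given point in time roughly corresponds to a predicate like the one below on the CITYOBJECT table; the Latest version option similarly corresponds to checking that TERMINATION_DATE is null:

-- features that were valid on March 1st, 2018
select
    id
from
    cityobject
where
    creation_date <= timestamp '2018-03-01 00:00:00'
    and (termination_date is null or termination_date > timestamp '2018-03-01 00:00:00')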

Note

For the feature version filter to work correctly, you must make sure that the validity times of subsequent feature versions do not overlap. The Importer/Exporter does not provide specific tools for managing feature versions in the database.

Hint

If your 3D City Database does not contain multiple feature versions, you should always disable the feature version filter to avoid unnecessarily complex SQL queries.

Tiling filter

As explained in Section 4.6, tiling is an important mechanism to split the visualization export into multiple tiles with smaller file size compared to a single output file. This allows a viewer to employ streaming techniques, for instance, to only load and display tiles that are visible from the current camera position, and to unload tiles when moving the camera. This helps to keep the memory footprint low and the visualization performance high, especially when using a web client.

The tiling filter lets you define the tiling pattern to be used in the visualization export.

_images/impexp_export_vis_tiling_filter.png

Tiling filter for visualization export operations.

When tiling is enabled, the bounding box of the entire export area is divided into a regular grid of tiles, and each tile is exported as a separate file (one file per selected display form). The bounding box can be defined by the user through a bounding box filter (see Section 4.6.5). If no bounding box is provided, the export operation will calculate it based on the set of top-level features to be exported.

The division of the bounding box into a regular grid of tiles can then be defined in two ways:

1. Automatic tiling
When selecting the Fixed side length option, you can specify the preferred side length for each tile in meters (default: 125m). The bounding box of the export area is then divided by this value to derive the number of rows and columns of the final grid. Note that rounding is applied in this process and thus it cannot be guaranteed that the resulting tiles are perfectly square. For example, assume the bounding box to be exported is 1000m x 1200m, and the preferred side length is set to 300m. Then the final grid will contain 4 x 4 tiles because 1000 / 300 = 3.3 is rounded up to 4 and 1200 / 300 = 4. The actual size of each tile will therefore be 250m x 300m in this example.
2. Manual tiling
The second option is to manually define the number of rows and columns.

The export operation will make sure that every top-level feature is exported only once and is thus located on precisely one tile to avoid duplicates. For this purpose, the center point of the feature’s bounding box is required to lie either inside the tile or on its left or top border. As a consequence, even very large or elongated objects (e.g., bridges) are assigned to only one tile and will become invisible as soon as that tile is unloaded from the viewer, even if parts of the object are still close to the camera.

The tile files are organized into a tree structure to be able to easily access them via their row and column indexes. The tree structure is discussed in Section 4.6.

Note

You can visualize the grid and the tile borders in a viewer by choosing to export the bounding boxes of the entire export area and/or of each tile in the General preferences of the export operation (see Section 4.6.7.1). This can be helpful for adjusting the tile sizes.

Attribute filter

The attribute filter lets you define values for the object identifier, gml:name and citydb:lineage, which must be matched by a top-level feature to be exported.

_images/impexp_export_vis_attribute_filter.png

Attribute filter for visualization export operations.

More than one identifier can be provided in a comma-separated list. Multiple gml:name and citydb:lineage values are not supported though.

Both the gml:name and citydb:lineage search strings support two wildcard characters: “*” representing zero or more characters and “.” representing a single character. You can use the escape character “\” to escape the wildcards. For example, if you provide *abc for the gml:name filter, then features with a gml:name of “xyzabc” and “abc” will both be exported. If you enter \*abc instead, the gml:name must exactly match “*abc” for the feature to be exported.
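
Internally, such patterns are evaluated against the gml:name and citydb:lineage values stored in the database, roughly in the way an SQL LIKE predicate would be evaluated (with “*” corresponding to % and “.” to _). The following statement is only an illustrative sketch, not the exact SQL generated by the tool, and assumes the gml:name values are stored in the NAME column of the CITYOBJECT table:

-- features whose gml:name ends with "abc" (corresponds to the pattern *abc)
select
    id
from
    cityobject
where
    name like '%abc'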

SQL filter

The SQL filter offers a powerful way to query top-level features based on a user-defined SELECT statement.

_images/impexp_export_vis_sql_filter_fig.png

SQL filter for visualization export operations.

The SQL query is entered in [1]. The + and - buttons [2] on the right side of the input field allow for increasing or reducing the size of the input field.

In general, any SELECT statement supported by the underlying database system can be used as SQL filter. The query may operate on all tables and columns of the database and may involve any database function or operator. The SQL filter therefore provides a high degree of flexibility for querying content from the 3DCityDB based on your filter criteria.

The only mandatory requirement is that the SQL query must return a list of database ids of the selected city objects. Put differently, the result set returned by the query may only contain a single column with references to the ID column of the CITYOBJECT table. The name of the result column can be freely chosen, and the result set may contain duplicate ID values. Of course, it must also be ensured that the SELECT statement follows the specification of the database system.

The following example shows a simple query that selects all city objects having a generic attribute of name energy_level with a double value less than 12.

select
    cityobject_id
from
    cityobject_genericattrib
where
    attrname='energy_level' and realval < 12

The CITYOBJECT_ID column of CITYOBJECT_GENERICATTRIB stores foreign keys to the ID column of CITYOBJECT. The return set therefore fulfills the above requirement.

Note that you do not have to care about the type of the city objects belonging to the ID values in the return set. Since the SQL filter is evaluated together with all other filter settings on the Export tab, the export operation will automatically make sure that only top-level features in accordance with the feature type filter are exported. For example, the above query might return ID values of buildings, city furniture, windows or traffic surfaces. If, however, only buildings have been chosen in the feature type filter, then all ID values in the result set not belonging to buildings will be ignored. This allows writing generic queries that can be reused in different filter combinations. Of course, you may also limit the result set to specific city objects if you like.
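
If you nevertheless want to restrict the result set to specific feature types already in the query, you can join the CITYOBJECT table and filter on its OBJECTCLASS_ID column. The following sketch narrows the energy_level example down to buildings; it assumes the default object class ID 26 for Building, so check the OBJECTCLASS table of your instance before using it:

-- restrict the energy_level example to Building features only
select
    g.cityobject_id
from
    cityobject_genericattrib g
inner join
    cityobject c on c.id = g.cityobject_id
where
    g.attrname = 'energy_level' and g.realval < 12
    and c.objectclass_id = 26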

The following example illustrates a more complex query selecting all buildings having at least one door object.

select
     t.building_id
from
     thematic_surface t
inner join
     opening_to_them_surface o2t on o2t.thematic_surface_id = t.id
inner join
     opening o on o.id = o2t.opening_id
where
     o.objectclass_id = 39
group by
     t.building_id
having
     count(distinct o.id) > 0

Caution

Statements other than SELECT, such as UPDATE, DELETE or DDL commands, will be rejected and yield an error message. However, in principle, it is possible to create database functions that can be invoked with a SELECT statement and that delete or change content in the database. Examples are the DELETE functions offered by the 3DCityDB itself (cf. Section 3.5.8). For this reason, the export operation scans the SQL filter statement for these well-known DELETE functions and refuses to execute them. However, similar functions can also be created after setting up the 3DCityDB schema and are therefore not known to the export operation a priori. If such functions exist and a user of the Importer/Exporter should not be able to accidentally invoke them through an SQL query, it is strongly recommended that this user connects to the 3DCityDB instance with a read-only database user only (cf. Section 3.4.2).

Bounding box filter

The bounding box filter takes a 2D bounding box as parameter that is given by the coordinate values of its lower left (xmin, ymin) and upper right (xmax, ymax) corner. It is evaluated against the ENVELOPE column of the CITYOBJECT table.

_images/impexp_export_vis_bbox_filter.png

Bounding box filter for visualization export operations.

Only top-level features whose envelope center point lies either inside the provided bounding box or on its left or top border will be exported.
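
As an illustration only (PostGIS syntax; the export operation generates its own, database-specific SQL, and :xmin, :ymin, :xmax, :ymax are placeholders for the corner coordinates), this center-point rule roughly corresponds to a condition like the following on the ENVELOPE column:

-- features whose envelope center lies inside the box or on its left or top border
select
    id
from
    cityobject
where
    st_x(st_centroid(envelope)) >= :xmin and st_x(st_centroid(envelope)) < :xmax
    and st_y(st_centroid(envelope)) > :ymin and st_y(st_centroid(envelope)) <= :ymax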

Similar to the CityGML/CityJSON export operation, the coordinate values of the bounding box filter can either be entered manually or chosen interactively in a 2D map window. To open the map window, click on the map button map_select. A comprehensive guide on how to use the map window is provided in chapter Section 4.8.

Note

When choosing a spatial bounding filter, make sure that spatial indexes are enabled (use the index operation on the Database tab to check the status of indexes, cf. Section 4.3.3).

Note

If the entire 3D city model stored in the 3DCityDB instance shall be exported with tiling enabled, then a bounding box spanning the overall area of the model must be provided. This bounding box can be easily calculated on the Database tab (cf. Section 4.3.2).

Feature type filter

With the feature type filter, you can restrict the export to one or more feature types by enabling the corresponding checkboxes. Only features of the selected type(s) will be exported.

_images/impexp_export_vis_feature_type_filter.png

Feature type filter for visualization export operations.

The feature type filter only shows top-level feature types. It will automatically contain feature types from CityGML ADEs if a corresponding ADE extension has been correctly registered with the Importer/Exporter (see Section 5.3).

If a predefined CityGML feature type cannot be represented in the LoD that has been chosen by the user for the visualization export on the VIS Export tab (see Section 4.6), this feature type will be greyed out in the filter and thus cannot be selected.

Note

When exporting city object groups, the following additional rules apply:

  1. If only the feature type CityObjectGroup is checked, then all city object groups together with all their group members (independent of their feature types) are exported.
  2. If further feature types are selected in addition to CityObjectGroup, then only group members matching those feature types are exported. Of course, all features that match the type selection but are not group members are also exported.

Caution

Support for terrain data in visualization exports is limited to the TINRelief feature type. Other relief types such as MassPointRelief, BreaklineRelief, and RasterRelief are not supported. Due to the typically large extent of a relief feature it is recommended to export it without tiling. Alternatively, the relief feature can be split into smaller components matching the desired tile size for visualization exports. However, this must be applied to the original terrain data prior to importing it into the 3DCityDB. The Importer/Exporter does not automatically clip terrain data at export time.

Preferences

The visualization export operation can be customized with various preference settings, which are presented in this chapter.

General

The General preferences dialog offers both common and output format-specific settings affecting the visualization export.

_images/impexp_kml_export_preferences_general_fig.png

Visualization export preferences – General settings.

General options

By default, the export process creates one KML file per display form and tile in the Tiles output folder (see Section 4.6). When tiling is disabled, a single KML file is created for each display form instead. The file will either contain KML representations of the exported top-level features in case of KML-specific display forms, or references to COLLADA models in case of the COLLADA/glTF display form (also see Table 4.24). The COLLADA models will be stored in subfolders relative to the KML file. When choosing the option Write to compressed KMZ archive [1], all files per tile and display form are packaged into a single KMZ archive instead. This reduces both the overall size of the exported data and the number of output files and, thus, helps to minimize loading times when sending the visualization data over a network.

Note

The main KML output file specified on the VIS Export tab linking all files in the Tiles output folder (see Section 4.6) is always exported as uncompressed KML file independent of this setting.

Note

The KMZ archive format is only defined for COLLADA models and does not support glTF models. For this reason, the options Write to compressed KMZ archive and the Export in glTF format [2] are mutually exclusive.

With the Show bounding box and Show tile borders options [1], a user can specify whether the bounding box of a) the entire export area and/or b) each individual tile should be displayed in the viewer. If enabled, a separate KML geometry for each bounding box will be contained in the output files.

The Record metadata about exported features in JSON file option [1] lets you create an additional JSON metadata file in the export directory. This file contains the identifier, the envelope (as 2D bounding box in WGS 84) and the tile (as row and column index) of every top-level feature in the visualization export. The JSON file will have the same base name as the main output file specified on the VIS Export tab (see Section 4.6). The following snippet exemplifies the contents of the JSON file.

{
   "BLDG_0003000b0013fe1f": {
      "envelope": [13.411962, 52.51966, 13.41277, 52.520091],
      "tile": [1, 1]
   },
   "BLDG_00030009007f8007": {
      "envelope": [13.406815, 52.51559, 13.40714, 52.51578],
      "tile": [0, 0]
   }
}

This metadata is helpful, for instance, when a viewer or application dynamically loads and unloads tiles and therefore only has access to the features on the tiles that are currently loaded. The metadata file can be used as an index over all exported features in order to quickly find and access features even if their corresponding tile is not loaded.

Note

Make sure to enable Cross-Origin Resource Sharing (CORS) headers if you want to access the metadata file from a web client using a cross-domain request. CORS headers are additional HTTP headers in the response that allow the web client to access the requested data. This must be enabled on the server that hosts the metadata file.

COLLADA/glTF options

The COLLADA/glTF options [2] only apply to visualization exports in COLLADA and glTF format, which are triggered by choosing the COLLADA/glTF display form on the VIS Export tab.

A viewer typically checks for each polygon of a 3D object whether it is “visible” based on its face orientation. If not visible, the polygon is excluded from the rendering process (so-called backface culling) to reduce the number of polygons that have to be drawn and, thus, to increase the visualization performance. If your data contains polygons with a wrong orientation, they will therefore not be shown in the viewer. To disable backface culling and force the viewer to render all polygons, enable the Force surfaces to be double sided option (default: disabled). Be aware that this might decrease the visualization performance though.

The option Generate surface normals calculates the surface normal for each polygon and stores it in the visualization model (default: enabled). Surface normals play a central role in shading and for the amount of light the polygon reflects in a 3D visualization. The following Fig. 4.60 shows a building model visualized with (left) and without surface normals (right). When exporting textured models, surface normals often do not increase the visual quality. The option may be disabled in this case to reduce the output file size.

_images/impexp_kml_export_surface_normal_comparison_fig.png

Different shadings of the same 3D object with (left) and without surface normals (right).

When working with textured models, texture images are sometimes larger than required for texturing the polygon. Loading massive texture data into a viewer may impact the visualization performance, so you should especially avoid loading unnecessary texture data. For this purpose, the Crop texture images option lets you cut the texture images for each polygon to the minimum required size during export.

Another option to optimize the visualization performance for textured models is to generate texture atlases (default: enabled). Instead of exporting one texture image file per surface to be textured, a texture atlas groups multiple texture images into a single file. This reduces the number of texture image files that have to be sent over the network and loaded by the viewer. The export operation will always create one or more texture atlases per top-level feature, but the same atlas is never shared between different top-level features. The texture atlases can be enforced to be power-of-two sized (default: enabled), which might be required for a viewer to efficiently manage the texture images.

In general, creating a texture atlas for a set of texture images is a combinatorial problem, also known as ‘knapsack problem’ (see [CGJT1980]). Different algorithms have been proposed in literature that differ in speed and packing efficiency. The export operation offers three different algorithms that can be selected in the preferences dialog:

BASIC
This algorithm recursively divides the texture atlas into empty and filled regions (see http://www.blackpawn.com/texts/lightmaps/default.html). The first item is placed in the top left corner. The remaining empty region is split into two rectangles along the sides of the item. The next item is inserted into one of the free rectangles and the remaining empty space is split again. Doing this in a recursive way builds a binary tree representing the texture atlas. When adding an item, there is no information about the sizes of the items that are going to be packed after it. This keeps the algorithm simple and fast. The items may be rotated when being inserted into the texture atlas.
TPIM
The touching perimeter algorithm (TPIM, see [LoMV1999] and [LoMM2002]) sorts images according to non-increasing area and orients them horizontally. One item is packed at a time. The first item packed is always placed in the bottom-left corner. Each following item is packed with its lower edge touching either the bottom of the atlas or the top edge of another item, and with its left edge touching either the left edge of the atlas or the right edge of another item. The choice of the packing position is done by evaluating a score, defined as the percentage of the item perimeter which touches the atlas borders and other items already packed. For each new item, the score is evaluated twice, for the two item orientations, and the highest value is selected.
TPIM w/o image rotation
Same as TPIM but the rotation of images is not allowed when packing. The score is, thus, evaluated only once since only one orientation is possible. This variant is faster but less efficient compared to TPIM.

From these algorithms, BASIC is the fastest (shortest generation time) and produces good results, whereas TPIM is the most efficient (highest ratio of used area of the total atlas size) but also the slowest.

Caution

If you already imported texture atlases into the 3DCityDB, they can, of course, be used “as is” for the visualization export. Simply deactivate both the Crop texture images and the Generate texture atlases options in this case. However, if the original texture atlases were created such that they are shared by more than one top-level feature, they will be exported redundantly for each top-level feature, and a viewer has to load the same texture atlas multiple times. To avoid this, it is recommended to both crop the original texture atlases and let the exporter generate new texture atlases from the cropped images.

Scaling texture images is another means of reducing file size and loading times. A scale factor between 0.2 and 0.5 often still offers fairly good image quality while improving the visualization performance. The default value is 1.0, which means no scaling. This setting is independent of the generation of texture atlases, and both can be combined.

Instead of exporting each top-level feature as individual COLLADA and/or glTF model, you can also choose to group multiple features into one model. Similar to texture atlases, this can help to reduce the number of individual models and files to be sent over the network and, thus, to improve loading times and visualization performance in the viewer.

Note

Only the first feature in a group is placed on the terrain model. All other features will receive local coordinates relative to this first feature. This might result in a wrong position on the earth surface if the features in a group are far away from each other.

Export in glTF format

In addition to COLLADA models, the Importer/Exporter can also export 3D contents from the database in glTF format. Technically, the top-level features are exported as COLLADA first and then converted to glTF using the open source COLLADA2glTF converter tool. There is a growing support for glTF models in various applications and especially in WebGL-enabled web engines and viewers.

Simply enable the Export in glTF format option to let the export operation create glTF models. This also makes additional glTF-specific settings available in the preferences dialog [3].

The mandatory COLLADA2glTF converter is available for Windows, Linux, and Mac OS X and is automatically installed with the Importer/Exporter. It can be found in the folder contribs/collada2gltf within the installation directory (see also Table 1.1). By default, this pre-installed converter is used by the export operation. You can choose to use another version of the COLLADA2glTF converter, in which case you have to provide the path to the executable in the Converter text field. Note that at least version 2.1 of the converter is required to be able to export both in glTF version 1.0 and 2.0.

By default, the glTF models are exported in addition to the COLLADA models. If the COLLADA models are not required in your subsequent processes though, they can be deleted after conversion by enabling the corresponding option to reduce the size of the overall export.

When exporting textured models, the texture images can either be exported as separate files relative to the glTF model (default) or be embedded in the glTF file. In the latter case, the base 64 encoded texture data is written to the glTF file. Both options have their pros and cons: if not embedded, viewers can first load and render the geometry and apply the texture images afterwards. This might result in a better user experience since content becomes visible quickly. On the other hand, all texture images have to be requested and loaded individually over the network, which might impair the overall visualization performance.

The exported glTF files can be further converted and compressed to the binary glTF format. This can additionally help to reduce loading and processing times.

The export operation supports both glTF version 1.0 and 2.0. The current version 2.0 of the glTF format is supported by most applications and tools and, thus, is used for the export by default. It also offers the possibility to compress the geometry data using the Google Draco compression technology, which significantly reduces the size of the output glTF files. For this reason, the Draco compression is enabled by default but can be disabled by the user if required.

Put every feature in its own KML region

With the Put every feature in its own KML region option [4] enabled, an individual KML <Region> is created for each exported top-level feature. While the visible from settings on the VIS Export tab (see Section 4.6) only affect the visibility of each tile (or of the entire export area in case tiling is disabled), this option allows you to define additional and more fine-grained visibility settings that are applied to the KML <Region> element of each feature. The envelope of each top-level feature as stored in the database is used to define the spatial extent of this region.

When enabled, the Visible from parameter lets you control the visibility of each feature. The meaning of this parameter is identical to that on the VIS Export tab: when the feature is projected onto the screen in the viewer, it must occupy an area of the screen that is greater than the number of pixels specified here (in square pixels) in order to become visible. Note that the same value is applied to all features.

The field View refresh mode specifies how the content of the KML <Region> is refreshed when the camera view changes. The following values are defined in KML and can be chosen in the preferences dialog:

Supported view refresh modes
Refresh mode
Description
never
Ignore changes in the view.
onRequest
Refresh the content only when the user explicitly requests it.
onStop
Refresh the content n seconds after movement stops, where n is specified by the View refresh time parameter.
onRegion (default)
Refresh the content when the <Region> becomes active.

As stated above, the parameter View refresh time specifies the number of seconds to wait after movement stops before the content of the KML <Region> is refreshed. This field is only active when the View refresh mode is set to onStop.

Note

The visibility settings per KML <Region> and top-level feature require that a viewer fully supports this KML mechanism (like, for example, Google Earth). The Cesium-based 3D web map client shipped with the 3D City Database (see Section 6) does not support this mechanism and, thus, silently ignores these settings.

Styling

The Styling preferences allow you to define colors and symbols to be used for the visualization of the exported city objects. Styles can be defined for each top-level feature type separately on different subnodes of the Styling preference node. If an ADE extension is registered with the Importer/Exporter that supports exporting ADE features for visualization, additional subnodes for each ADE top-level feature type are automatically added and available.

Styling of surface and solid geometries

The styling options for surface and solid geometries of city objects are explained in the following using the preferences dialog of the Building feature type as shown below. This dialog offers the most settings of all feature types. The only exceptions are the dialogs for GenericCityObject and ADE top-level feature types, which provide additional styling options for point and curve geometries (see Section 4.6.7.2.2).

_images/impexp_kml_export_preferences_rendering_building_fig.png

Visualization export preferences – Styling options.

The dialog lets you define different styles for the different display forms offered by the visualization export (see Section 4.6). The styling options are therefore grouped into Footprint and Extruded [1], Geometry [2], and COLLADA/glTF [3].

Default style and highlight style

All styling options for the different display forms have in common that you can specify a default style and an optional highlight style. For feature types that can be modelled with roof surfaces, such as buildings, bridges and tunnels, an additional roof style is available for the Geometry and COLLADA/glTF display forms.

A style consists of a fill color and an outline color that are used for rendering the surfaces of a corresponding city object. Only in case of the COLLADA/glTF display form, outline colors are not supported for the default and roof styles. To choose a color, simply click on the button for the fill or outline color. This brings up a default color selection dialog where you can either pick a color from a palette or enter a HSV, HSL, RGB or CMYK color code. Note that you can also define an alpha/transparency value for your color. After accepting the selected color, it is also used as new color for the button in the dialog. This way, you have an immediate visual feedback of the color that will be applied in the visualization export.

Both the default style and the roof style are immediately applied when a city object is loaded and becomes visible in the viewer. The highlight style, however, is only shown when hovering the mouse over a city object. This highlighting effect is realized by duplicating all surfaces of the feature and assigning the chosen highlight color to these duplicates. The resulting “highlight geometry” is stored as additional KML geometry for each feature and is loaded together with the feature but set to invisible by default.

For the display forms Geometry and COLLADA/glTF, an additional Surface distance value between 0 and 10 meters (default: 0.75) can be defined to make sure that the original surfaces and their duplicates are not coincident. Each highlight surface is moved along the direction of its face normal by the provided distance value. This way, the highlight geometry realizes an “exploded” view of the city object, which helps to avoid Z-fighting effects.

_images/impexp_kml_export_mouseover_highlighting_fig.png

Example of the highlight style in Google Earth.

Note

Since the highlight geometry duplicates the original feature geometry, it has a significant impact on loading times and may also impair the general visualization performance.

Note

The highlight style has been mainly developed for use with Google Earth and directly works with this viewer. The 3D web map client shipped with the 3D City Database (see Section 6) does not support the highlight geometry but uses Cesium functionalities for object highlighting. If you intend to use the export with the 3D web map client only, disable the highlight geometry.

Footprint and Extruded styling options

In addition to the definitions of colors for the different styles, the Footprint and Extruded settings let you choose the LoD0 geometry that shall be used for rendering buildings when LoD0 is used as target LoD for the visualization export on the VIS Export tab. Note that this selection is only available for the Building feature type but not for other feature types. The following options are available for the LoD0 geometry:

  • Footprint: The footprint geometry of buildings will be used for visualization.
  • Roofprint: The roofprint geometry of buildings will be used for visualization.
  • Roofprint - use footprint as fallback: Roofprint geometries will be used in the first place. If no roofprint geometry is available, the footprint geometry is used as fallback.

Geometry styling options

As mentioned above, an additional roof style can be defined for the Geometry display form if the corresponding feature type supports roof surfaces (i.e., buildings, bridges, and tunnels). The roof style is applied to the surface geometries of all roof surfaces modelled for the city object. The default style is used for all other surfaces.

Especially for buildings, users often want to have roofs painted in a different color than walls even if roof surfaces are not explicitly modelled for the building. For this reason, the export process uses simple heuristics to identify roofs purely based on the geometry of the building: surfaces touching the ground (i.e., having the lowest height values) are considered to be walls and receive the default style. All other surfaces are classified as roofs and receive the roof style.

Note

The automatic detection of roof surfaces is only applied to buildings, and only if the building is to be exported in LoD1 or LoD2. Surfaces that are explicitly modelled as roof or wall surface in the model are, of course, excluded from the automatic detection.

COLLADA/glTF styling options

In contrast to the display forms Extruded, Footprint and Geometry, a user can select an appearance theme for COLLADA/glTF exports on the VIS export tab to apply appearance data stored in the 3DCityDB to the surfaces of the exported city objects (see Section 4.6). Appearance data not only comprises textures but also colors. Thus, textures and colors assigned to a surface through the selected appearance theme always take precedence over the styles defined for the COLLADA/glTF display form on this preferences dialog.

As mentioned above, outline colors cannot be defined for the default style and the roof style because they are not supported by the COLLADA and glTF formats. Similar to the Geometry styling options, the roof style is only available for feature types that support roof surfaces, and an automatic roof detection is applied to buildings in accordance with the rules discussed above.

Styling of point and curve geometries

GenericCityObject features can also be modelled and stored with point and curve geometries in addition to a surface-based representation. Point and curve geometries can be exported in KML format by the visualization export operation. Thus, they are supported by the KML-based display forms Footprint, Extruded and Geometry but not when exporting to COLLADA/glTF.

The styling preferences for the GenericCityObject feature type therefore not only provide a dialog for surface and solid geometries as described above, but also for point and curve geometries as shown below.

_images/impexp_kml_export_point_curve_rendering.png

Styling options for point and curve geometries of GenericCityObject features.

Note

Similar to the GenericCityObject feature type, also the preferences for ADE top-level feature types offer styling options for surface and solid geometries and for point and curve geometries.

Point styling options

Point geometries are visualized by rendering a symbol at the point location. You can choose between a cross, an icon and a cube symbol as visual representation of the point geometry [1]. For each symbol, a separate default style and optional highlight style can be defined.

The Altitude Mode parameter specifies how the height values (altitude) of the exported point geometries are to be interpreted by a viewer or application.

Supported altitude modes for point geometries
Altitude mode
Description
relative
The altitude is interpreted as a value in meters above the terrain. The absolute height value can be determined by adding the terrain elevation.
absolute
The altitude is interpreted as an absolute height value in meters according to the vertical reference system (EGM96 geoid in Google Earth, WGS84 ellipsoid in Cesium).
clamp to ground
The altitude will be ignored and the point geometry will always be clamped to the ground.

The three different symbol types offer the following additional settings.

Cross
When choosing the cross symbol, the point geometry is visualized as a crosshair with a length of approx. 2 meters. This length value cannot be customized. However, you can pick an additional line width for both the default and the highlight style.
_images/impexp_kml_export_example_cross_fig.png

A point geometry displayed as cross symbol.

Icon
The icon symbol uses a KML <Placemark> element with the point geometry and displays an icon at this position. Besides the color of the icon, its size can also be defined for both the default and the highlight style using the Scale parameter, where the default value 1 means no scaling.
_images/impexp_kml_export_example_icon_fig.png

A point geometry displayed as icon symbol.

Cube
The third symbol for representing a point geometry is the cube. In this case, a small cube is rendered at the point location. You can define the side length of the cube for the default style, which will also be applied to the highlight style. In addition, you can define the Line width of the cube’s outline for the highlight style (a value of 1.0 is used for the default style).
_images/impexp_kml_export_example_cube_fig.png

A point geometry displayed as cube symbol.

Curve styling options

The styling options for curve geometries [2] are similar to those for the cross symbol of point geometries. As with point geometries, you can also define a separate Altitude mode for curve geometries. The values and their meanings are identical to those described above in Table 4.21.

Note

When displaying curve geometries in Google Earth, the altitude modes absolute and relative may result in curves intersecting with or hovering over the terrain. Please choose clamp to ground if you want the curve geometries to be draped on the terrain instead.

The following figure exemplifies the visualization of point and curve geometries in Google Earth. It shows an indoor routing network for a building on the campus of the Technical University Munich (TUM). The nodes and edges of this network are modelled as GenericCityObject features with 3D point and curve geometries. The nodes are displayed using the cube symbol.

_images/impexp_kml_export_tum_vis_example_fig.png

Visualization of a network model of the building interior of Technical University Munich (TUM).

Balloon

KML offers the possibility to enrich a placemark with a pop-up window containing additional information about the placemark. This pop-up window is called balloon and is shown when clicking on the placemark. The visualization export operation supports adding balloons to the exported city objects in all display forms.

Note

The 3D web map client shipped with the 3D City Database (see Section 6) does not support KML balloons. If you want to use this viewer, disable the creation of balloons. The 3D web map client offers alternative ways for showing additional information about a city object when clicking on it. For instance, you can display attributes coming from an online spreadsheet (e.g. Google Docs).

Balloons can be defined for each top-level feature type separately on different subnodes of the Balloon preference node. This allows you to configure different balloon contents for the different feature types, and some feature types may even be exported without balloons. If an ADE extension is registered with the Importer/Exporter that supports exporting ADE features for visualization, additional subnodes for each ADE top-level feature type are automatically added and available.

The balloon preferences are illustrated in the following figure for the Building feature type. The dialogs for GenericCityObject and ADE top-level features additionally offer the possibility to create balloons for their point and curve representations. The settings for these additional balloons are, however, identical to the ones shown below.

_images/impexp_kml_export_preferences_balloon_building_fig.png

Visualization export preferences – Balloon settings.

In general, the content to be displayed inside a KML balloon can be a simple string or an HTML document. The HTML content can be styled using external Cascading Style Sheets (CSS) and may contain JavaScript code.

The balloon options offer two different ways to provide the balloon content for a city object. First, you can choose to use the content from the generic attribute Balloon_Content. This option offers a lot of flexibility because the content can be different for each city object. Second, you can use the content from a file. This way you can define a balloon template in an external file, and this template is used for all city objects. Thus, all balloons will share the same layout and content. Simply provide the path to the template file in the corresponding text field or click the Browse button to open a file selection dialog. Example HTML template files are provided in the subfolder templates/balloons of the Importer/Exporter installation directory.

Both options can also be combined such that the external template file is only used as fallback in case a city object lacks the generic attribute Balloon_Content.
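
If you want to check which city objects already carry individual balloon content, you can query the generic attributes table directly. The following statement is only a sketch and assumes the content is stored as a string value in the STRVAL column:

-- list city objects with an individual Balloon_Content generic attribute
select
    cityobject_id, strval
from
    cityobject_genericattrib
where
    attrname = 'Balloon_Content'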

By default, the balloons are included in the KML output files generated by the export operation for each visualization model. Alternatively, you can choose to export them into a separate file per feature by enabling the corresponding option. In the latter case, they are stored inside a balloons directory relative to the visualization model. When storing the balloons in separate files, you must make sure that these files can be accessed from your viewer. For example, when using Google Earth, access to local files must be granted in the Tools -> Options -> General settings.

Note

When using Google Earth as viewer and COLLADA/glTF as display form, it is recommended to use a highlight style for those feature types that shall receive a balloon. The balloon is then attached to the highlight geometry. In contrast to the COLLADA model, the highlight geometry is directly clickable in the scene.

The balloon content does not need to be static. Instead, it can contain references to data stored in the 3DCityDB for each individual city object. These references are dynamically resolved at export time and replaced with the actual values from the database. The references must be put inside a <3DCityDB> element within the balloon template so that they can be recognized by the export operation. This works regardless of whether the content is stored in the generic attribute or in an external file. As mentioned above, balloon content provided in HTML may also contain JavaScript code. If you use JavaScript for your balloon, you can also use <3DCityDB> placeholders inside your code.

The <3DCityDB> element offers a rich set of expressions and keywords that can be used to precisely describe the data you want to fetch from the database. The use of these expressions to create dynamic balloon content is discussed in detail in Section 4.6.8. The following figure shows an example of a balloon that contains feature-specific attributes and information that are dynamically populated during the export using <3DCityDB> references.

_images/impexp_kml_export_balloon_dynamic_contents_fig.png

Dynamic balloon content showing feature-specific attributes and information.

Elevation

The Elevation preferences dialog offers settings to adapt the height values of exported city objects in the resulting visualization models. This also has an impact when rendering the models in a viewer.

_images/impexp_kml_export_preferences_terrain_fig.png

Visualization export preferences – Elevation settings.

Altitude mode

The altitude mode parameter [1] specifies how the height values (altitude) of the exported city objects are to be interpreted by a viewer or application. It can be set to either absolute (default), relative or clamp to ground.

Supported altitude modes for city objects
Altitude mode
Description
absolute
The altitude is interpreted as an absolute height value in meters according to the vertical reference system (EGM96 geoid in Google Earth, WGS84 ellipsoid in Cesium).
relative
The altitude is interpreted as a value in meters above the terrain. The absolute height value can be determined by adding the terrain elevation.
clamp to ground
The altitude will be ignored and the city object will always be clamped to the ground.

To pick the right mode, it is important to understand how the height values have been derived for the city objects stored in the database. Nowadays, absolute height values are more commonly used in the creation of 3D city models than relative height values. Either way, the chosen method should be consistent for all features in the database.

If you choose an altitude mode that does not fit your height values, this might obviously result in city objects flying over or sinking into the terrain in a viewer. This effect can, however, also occur if your viewer uses another terrain model than the one used for deriving the height values for your city objects. In the best case, the viewer lets you use your own terrain model for visualization that matches your city objects.

Caution

The altitude mode setting is ignored by the Cesium-based 3D web map client shipped with the 3D City Database (see Section 6). The height values are always interpreted as absolute values by this viewer.

Note

The altitude mode settings will only be considered when the city objects are exported in the Geometry or COLLADA/glTF display forms. When choosing the Footprint display form instead, clamp to ground is enforced automatically to ensure that the exported footprints are always draped onto the terrain. Likewise, when exporting in the Extruded display form, the relative mode will be used independent of the choice made in this dialog.

In addition to the altitude mode, you can also specify to use the original height values without transformation in the export process. When creating the visualization models, the coordinates of the city objects are internally transformed to WGS84, for instance, to be used in KML geometries. Depending on the spatial database (PostgreSQL/PostGIS, Oracle) and its version used for running the 3DCityDB, this transformation will not only affect the x and y coordinates but also the height values. If, for instance, the city objects have absolute height values and you use a matching terrain in the viewer, then changes to the height values due to transformations might again result in flying or sinking city objects. With this option enabled (default), the export operation will always keep the original height values as stored in the database in the entire process.

Height offset

In addition to the altitude mode, a height offset [2] can be specified that is added to every z coordinate of all geometries that will be exported. If you do not want to apply a height offset, simply choose the No offset option which is also the default value.

The Constant offset option lets you define a constant value, positive or negative, and every height value is incremented by this value. This option is particularly useful when exporting only a single city object or in case all height values share a global error that can be easily corrected with a constant offset.

When choosing the Move every object on the globe (zero elevation) option, the lowest point for each city object will be determined and the height value of this point will be set to 0. The same offset will then be applied to the remaining points of the city object. In contrast to a constant offset, this method uses an individual offset for each city object. It makes sense to combine this option with the relative altitude mode.

Note

When using Google Earth as viewer, moving every object to zero elevation and combining this with a relative altitude mode results in city objects that are perfectly sitting on the ground independent of whether the terrain layer is activated or not.

For the Cesium-based 3D web map client, this option can be used to render the city objects on the WGS84 ellipsoid. However, as soon as you load a terrain, remember that the height values are interpreted as being absolute (see above) by this viewer. Thus, the city objects will most likely be located under the terrain.

The last option for choosing a height offset is to use the value stored in the generic attribute GE_LoDn_zOffset of the city object. Since the value can differ between city objects, this option can also be used to adapt the height value for each city object individually.

This option is mainly intended to be used with Google Earth as viewer. For the terrain model used in Google Earth, an Elevation API is offered by Google that lets you query the height value of the terrain for every location on earth. The idea is to use this service to calculate an individual offset for each city object so that the city object perfectly sits on the Google Earth terrain. For this purpose, the terrain height is queried for all points of the city object that have the lowest z coordinate value. The offset is then chosen so that the city object is moved to the lowest terrain height returned for these points. The Elevation API is, however, only queried if you enable the Query the Google Elevation API when no data is available option (default: false).

Using the Elevation API service requires internet access and might be time consuming if you export a large number of city objects or even if you export a smaller scene multiple times. For this reason, the export operation automatically stores the determined offset as generic attribute GE_LoDn_zOffset for every city object in the database. Before querying the Elevation API, it will first be checked whether this attribute exists. If so, the value of the attribute will be directly used as height offset without querying the Elevation API again for this city object.

Since city objects may have different geometries with different height values in different LoDs, the height offset must be stored for each LoD separately. For this purpose, the n in GE_LoDn_zOffset is meant as placeholder for the LoD the offset should be applied to. Thus, the attribute names stored in the database are actually GE_LoD1_zOffset, GE_LoD2_zOffset, etc.

Storing the height offset as generic attribute with the city object also ensures that this information is available when exporting the city object in CityGML/CityJSON format and therefore can be consumed by other applications or even be transported across 3DCityDB instances. Moreover, you can manually adjust the offsets at any time in the database or even delete the attribute so that it will be calculated anew with the next visualization export.

Note

Although the GE_LoDn_zOffset attribute is mainly intended to be used with Google Earth as viewer and to be automatically populated using the Google Elevation API, it is not restricted to this use case. Instead, you can also use different data and algorithms to calculate a height offset that is specific to each city object and store your results in GE_LoDn_zOffset. The export operation will consider and apply these height offsets just the same if you enable the Use generic attribute GE_LoDn_zOffset option. Thus, it is irrelevant how the GE_LoDn_zOffset attribute is populated.

Caution

Starting from July 2018, the Elevation API cannot be used for free anymore but requires an API key to be able to query the service. Thus, the option Query the Google Elevation API when no data is available should only be enabled when a valid Elevation API key is available. Users can provide their own Elevation API key in the general preferences as described in Section 4.7.6. Please refer to https://cloud.google.com/maps-platform/terms/ for more details on the Google Maps Platform Terms of Service.

Caution

Be careful if you have already calculated the GE_LoDn_zOffset attribute and want to export and re-import your city objects into another 3DCityDB instance. If this target 3DCityDB is running on a different spatial database system or a different version of the same system, applying the height offset from GE_LoDn_zOffset might give you different results. The reason is that z coordinates might or might not be changed during coordinate transformation by the different spatial database systems (see discussion above). Results will be consistent in case the target database system is identical or if you always make sure to keep the original height values in the export process.

Dynamic balloon content

The visualization export operation supports adding KML balloons to the exported city objects (see Section 4.6.7.3). The content of these balloons can contain <3DCityDB> elements to reference data stored in the 3DCityDB for each individual city object. These references are dynamically resolved at export time and replaced with the actual values from the database. The <3DCityDB> element offers a rich set of expressions and keywords that can be used to precisely describe the data you want to fetch from the database.

Rules for simple expressions
  • Expressions must begin with <3DCityDB> and end with </3DCityDB>. Expressions are case-insensitive.
  • Expressions are coded in the form "TABLE/[AGGREGATION FUNCTION] COLUMN [CONDITION]". The table and column tokens are mandatory and must exist in the 3DCityDB schema (see Section 3.2.16.1). Both the aggregation function and the condition are optional. When present they must be written in square brackets. The expressions are mapped to SQL statements of the form: SELECT (AGGREGATION FUNCTION) COLUMN FROM TABLE (WHERE CONDITION).
  • Each expression will only return those entries relevant to the city object being currently exported. For this purpose, a filter condition of the form "TABLE.CITYOBJECT_ID = CITYOBJECT.ID" is always added automatically to each expression and, thus, is not required to be added by the user manually.
  • Results will be returned as a comma-separated list. A list can be empty or contain one or more items satisfying the expression. When only interested in the first entry in a list, the aggregation function FIRST can be used. Other possible aggregation functions are LAST, MAX, MIN, AVG, SUM and COUNT.
  • A condition can simply be an index to access a specific entry from the result list. Alternatively, it can be a comparison expression involving a column name, a comparison operator and a value. For instance: [2] or [NAME = 'abc'].
  • Invalid results will be silently discarded. Valid results will be delivered exactly as stored in the 3DCityDB tables. You can use JavaScript functions (e.g., substring()) to post-process and adapt the results to your needs (see the sketch after this list).
  • All items in the result list are always of the same data type which corresponds to the data type of the corresponding column in the database. If different result types must be placed next to each other in a balloon, then different <3DCityDB> expressions must be used.
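
The following sketch shows such JavaScript post-processing inside a balloon template. It assumes the city object carries a generic attribute named H_Trauf_Min (as in the examples further below) and merely rounds the value for display; at export time, the <3DCityDB> expression is replaced with the raw value queried from the CITYOBJECT_GENERICATTRIB table before the viewer evaluates the script.

<script type="text/javascript">
  // The expression below is resolved at export time and replaced with the stored REALVAL.
  var eavesHeight = parseFloat("<3DCityDB>CITYOBJECT_GENERICATTRIB/REALVAL[ATTRNAME = 'H_Trauf_Min']</3DCityDB>");
  // Guard against city objects that do not carry the attribute at all.
  document.write(isNaN(eavesHeight) ? "Eaves height: n/a" : "Eaves height: " + eavesHeight.toFixed(2) + " m");
</script>
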
Special keywords

In addition to simple expressions, you can also use SPECIAL_KEYWORDS that are predefined placeholders for object-specific content. These keywords refer to data that is not retrieved “as is” in a single step from a table in the 3DCityDB but has to undergo some preprocessing steps (not achievable by simple JavaScript means) in order to calculate the final value before being exported to the balloon. A typical preprocessing step is the transformation of coordinates into a CRS different from the one defined for the 3DCityDB instance.

The following table lists the available special keywords.

Available special keywords

  • CENTROID_WGS84 – coordinates of the object’s centroid in WGS84 in the following order: longitude, latitude, altitude
  • CENTROID_WGS84_LAT – latitude of the object’s centroid in WGS84
  • CENTROID_WGS84_LON – longitude of the object’s centroid in WGS84
  • BBOX_WGS84_LAT_MIN – minimum latitude value of the object’s envelope in WGS84
  • BBOX_WGS84_LAT_MAX – maximum latitude value of the object’s envelope in WGS84
  • BBOX_WGS84_LON_MIN – minimum longitude value of the object’s envelope in WGS84
  • BBOX_WGS84_LON_MAX – maximum longitude value of the object’s envelope in WGS84
  • BBOX_WGS84_HEIGHT_MIN – minimum height value of the object’s envelope in WGS84
  • BBOX_WGS84_HEIGHT_MAX – maximum height value of the object’s envelope in WGS84
  • BBOX_WGS84_LAT_LON – all four latitude and longitude values of the object’s envelope in WGS84
  • BBOX_WGS84_LON_LAT – all four longitude and latitude values of the object’s envelope in WGS84

In order to use these keywords in your balloon template, you must apply the following rules:

  • Expressions for special keywords are case-insensitive. Their syntax is similar to simple expressions: start and end are marked by <3DCityDB> and </3DCityDB> tags, the table name must be SPECIAL_KEYWORDS and the column name must be one of the predefined keywords listed above.
  • No aggregation functions or conditions are allowed for SPECIAL_KEYWORDS. If present, they will be interpreted as part of the keyword and therefore not recognized.
Examples for simple expressions

The following examples illustrate the use of simple expressions in your balloon template.

  • <3DCityDB>ADDRESS/STREET</3DCityDB> returns the content of the STREET column of the ADDRESS table for this city object.
  • <3DCityDB>BUILDING/NAME</3DCityDB> returns the content of the NAME column of the BUILDING table for this city object.
  • <3DCityDB>CITYOBJECT_GENERICATTRIB/ATTRNAME</3DCityDB> returns the names of all generic attributes for this city object. The names will be provided as comma-separated list.
  • <3DCityDB>CITYOBJECT_GENERICATTRIB/REALVAL[ATTRNAME = 'H_Trauf_Min']</3DCityDB> returns the value stored in the column REALVAL for the generic attribute having the name H_Trauf_Min for this city object.
  • <3DCityDB>APPEARANCE/[COUNT]THEME</3DCityDB> returns the number of appearance themes for this city object.
  • <3DCityDB>APPEARANCE/THEME[0]</3DCityDB> returns the first appearance theme for this city object.
  • <3DCityDB>SPECIAL_KEYWORDS/CENTROID_WGS84</3DCityDB> returns the centroid of the city object in WGS84.

<3DCityDB> expressions can not only be used to generate plain text in a balloon, but also to create any valid HTML content such as hyperlinks or embedded images:

  • <a href="<3DCityDB>EXTERNAL_REFERENCE/URI</3DCityDB>">Click here for more information</a> creates a hyperlink to the object’s external reference.
  • <img src="<3DCityDB>CITYOBJECT_GENERICATTRIB/URIVAL[ATTRNAME=\'Illustration\']</3DCityDB>" width=400> adds an image to the balloon whose URL is taken from the generic attribute “Illustration”.

For the last example, assume that the “Illustration” attribute contains a URL to an image of the Pergamon Museum in Berlin on Wikipedia. In the balloon of the corresponding city object, this would be resolved to a valid <img> tag and displayed in the viewer as shown below.

<img src="https://upload.wikimedia.org/wikipedia/commons/d/d1/FrisoaltarPergamo.jpg" width=400>
_images/impexp_kml_export_balloon_embedded_image_fig.png

Dynamically generated balloon containing an embedded image (image taken from Wikipedia).
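
Special keywords can be combined with HTML content in the same way. The following sketch creates a hyperlink that opens the object’s centroid location in an external map service; the Google Maps URL pattern is only an illustration and not part of the Importer/Exporter.

<a href="https://www.google.com/maps?q=<3DCityDB>SPECIAL_KEYWORDS/CENTROID_WGS84_LAT</3DCityDB>,<3DCityDB>SPECIAL_KEYWORDS/CENTROID_WGS84_LON</3DCityDB>">Show centroid on a map</a>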

Rules for iterative expressions

Simple expressions are sufficient for most use cases, when only a single value or a list of values from a single column is needed. However, sometimes you need to access more than one column at the same time with an unknown number of results. For such scenarios, iterative expressions are available. An example use case is to iterate over all generic attributes available for the city object to print both their names and values.

The general syntax of iterative expressions is shown in the snippet below.

<3DCityDB>FOREACH TABLE/COLUMN[,COLUMN...][CONDITION]</3DCityDB>
  ...access the returned values here using the tokens %1, %2, etc. ...
<3DCityDB>END FOREACH</3DCityDB>

You can provide one or more columns for the FOREACH statement. The values of the separate columns are accessed using the tokens %1, %2, etc., where the number refers to the index of the column in the list. The first column is assigned the index 1. The token %0 can be used to retrieve the current row number.

The following additional rules apply to iterative expressions:

  • No aggregation functions are allowed for iterative expressions. The number of columns is not limited, but all of them must belong to the same table. A condition is optional. Similar to simple expressions, an implicit condition is always added to make sure that the data belongs to the currently exported city object.
  • The FOREACH loop iterates over all returned values. If values shall be skipped when displaying the balloon, this must be achieved by JavaScript means.
  • The generated balloon content will have as many repetitions of the HTML/JavaScript code between the FOREACH and END FOREACH tags as values returned from the query.
  • Additional simple expressions or SPECIAL_KEYWORDS are not allowed inside the FOREACH and END FOREACH tags.
  • FOREACH expressions must not be nested.

The following example shows how to use a FOREACH expression to list all generic attributes of a city object in the balloon. A JavaScript function is used to display the value of a generic attribute as tool tip when hovering with the mouse over the attribute name. This example is also provided as template file in the templates/balloons folder within the installation directory of the Importer/Exporter.

<script type="text/javascript">
// Writes the attribute name as a <span> element whose title attribute (shown as
// tool tip) contains the attribute value matching the generic attribute's data type.
function ga_value_as_tooltip(attrname, datatype, strval, intval, realval) {
  document.write('<span title="');
  switch (datatype) {
    case "1": document.write(strval);  break;
    case "2": document.write(intval);  break;
    case "3": document.write(realval); break;
    default:  document.write("unknown");
  }
  document.write('">' + attrname + '</span>');
}

<3DCityDB>FOREACH CITYOBJECT_GENERICATTRIB/ATTRNAME,DATATYPE,STRVAL,INTVAL,REALVAL</3DCityDB>
ga_value_as_tooltip("%1", "%2", "%3", "%4", "%5");
<3DCityDB>END FOREACH</3DCityDB>
</script>

_images/impexp_kml_export_balloon_dynamic_contents_fig.png

Dynamic balloon content showing the list of generic attributes and their values as tool tip.
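
The tooltip example above does not make use of the row number token %0. The following shorter sketch illustrates it; it assumes that the generic attributes of interest carry string values in the STRVAL column, so attributes of other data types would simply produce empty values here.

<3DCityDB>FOREACH CITYOBJECT_GENERICATTRIB/ATTRNAME,STRVAL</3DCityDB>
<p>%0. %1: %2</p>
<3DCityDB>END FOREACH</3DCityDB>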

Recommendations

This chapter provides general recommendations and best practices for using the visualization export operation of the Importer/Exporter and for loading exports with Google Earth and Cesium.

General settings

Depending on the quality and complexity of the 3D city model content stored in the 3DCityDB, export results may vary greatly in visual quality and loading performance. In most cases, some experimenting will be required to fine-tune the export parameters. However, some general rules apply in almost all cases:

  • Using KMZ as output format is recommended when the files will be accessed over a network and the selected display form is either Footprint, Extruded, or Geometry. When exporting glTF models, writing KMZ files is not supported though.
  • Visibility values for the different display forms should be increased in steps of around one third of the tile side length.
  • Setting visible from to 0 pixels (always visible) should be avoided, especially for large or complex exports, because otherwise the viewer will immediately load all data as it must be visible at once.
  • Be careful that tile files do not become too large, otherwise the viewer may become unresponsive. You can influence the tile file size by changing the side length of tiles (whether tiling is automatic or manual). Tests with Google Earth showed that files should be smaller than 10MB. To meet this limit for crowded areas with many city objects, a tile side length between 50 and 100m seemed to be usable.
  • When not exporting in the COLLADA/glTF display form, files will seldom reach this 10MB limit, but viewers might still become unresponsive if the loaded file contains a lot of polygons. Therefore, do not use too large tiles for Footprint, Extruded and Geometry exports even if the resulting files are comparatively small.
  • Do not choose too small tile sizes either; otherwise, many tiles may become visible at the same time, which renders the advantage of tiling useless.
  • Using texture atlas generation when producing COLLADA/glTF display form exports always results in faster loading times.
  • Of the available texture atlas algorithms, BASIC is the fastest (shortest generation time) and produces good results, whereas TPIM is the most efficient (highest ratio of used area to total atlas size) but also the slowest.
  • Texture images can often be scaled down to 0.2 - 0.5 without noticeable quality loss. This depends, of course, on the quality of the original textures.
  • Using a highlight style puts the same polygons twice in the resulting export files: once for the features themselves and once for their highlight geometry. This has a negative impact on the viewing performance. The more complex the city objects are, the worse the impact. When highlighting is enabled for exports based on models in LoD3 or higher, especially Google Earth may become quite slow.
  • If you want to use the 3DCityDB web map client to visualize the exported datasets, a highlight style should not be used since object highlighting is supported by the web map client in another way without the need for extra highlight geometries.
  • The 3DCityDB web map client allows for on-the-fly activating and deactivating shadow visualization for 3D objects exported in the glTF format. However, this functionality is currently not available when viewing KML models exported in the Footprint, Extruded, and Geometry display forms.
  • Balloon generation is slightly more efficient when a single template file is applied for all exported objects.
  • When exporting in the Footprint or Extruded display forms, the altitude mode settings will be silently ignored but an appropriate value will be automatically chosen to ensure that the exported objects will be properly placed on the terrain. However, when exporting in the Geometry or COLLADA/glTF display forms, the altitude mode settings must be deliberately chosen with regard to the viewer to be used.
  • In most cases, the combination of the relative altitude mode with the Move every object on the globe (zero elevation) option for height offsets allows for properly placing objects on the terrain. However, when using the Cesium-based 3DCityDB web map client, its default WGS84 ellipsoid terrain model must be activated.
  • When your city objects are stored with absolute height values in the database and you intend to use Google Earth as viewer, you should consider using the following options (given that you own an API key for the Google Elevation API): 1) absolute altitude mode, 2) use the generic attribute GE_LoDn_zOffset for the height offset, and 3) query the Google Elevation API.

Loading exports in Google Earth and Cesium

In order to make full use of the features and functionalities provided by Google Earth, it is highly recommended to use the enhanced version of Google Earth – Google Earth Pro – which has been available free of charge since January 2015. Some of the features described in this documentation, like highlighting, also work flawlessly in the normal Google Earth from version 6.0.1 onwards.

Displaying a file in Google Earth can be achieved by opening it through the menu (”File”, “Open”) or double-clicking on any KML or KMZ file if these extensions are associated with the program (default option at Google Earth’s installation time).

Loaded files that have been generated anew after loading (for example, because the balloon template file was changed) can be refreshed by choosing the "Revert" option in the context menu of the sidebar. There is no need to delete and reload them or to shut down and restart the Earth browser.

For best performance, cache options (”Tools”, “Options”, “Cache”) should be set to their maximum values, 1024 MB for memory cache size, 2000 MB for disk cache. Actual maximums may be lower depending on the computer’s hardware.

Google Earth shows the terrain layer by default for a realistic display of 3D models. Disabling the terrain layer is only possible in Google Earth Pro. You may need to disable the terrain layer in case the exported models cannot be seen although they are shown as loaded in Google Earth's sidebar, since they are probably buried in the ground (see Section 4.6.7.4).

When balloons are exported into individual files (one for each object) that are written together into a balloons directory, access to local files and personal data must be allowed ("Tools", "Options", "General"). Google Earth will issue a security warning that must be accepted, otherwise the contents of the balloons (when stored in individual files and not as part of the doc.kml file) will not be displayed.

It is also possible to upload the generated KML/COLLADA/glTF files to a web server and access them from there via an internet browser with the Cesium Virtual Globe (since December 2015, the Google Earth Plugin is no longer supported by most modern web browsers due to security considerations). In this case, Cross-Origin Resource Sharing (CORS) must be enabled on the web server to allow cross-domain AJAX requests sent from the web-based frontend.

Note

Starting with version 7 (and at least up to version 7.1.1.1888), Google Earth has changed the way transparent or semi-transparent surfaces are rendered. This is especially relevant for visualizations containing highlighting surfaces (explained in Section 4.6.7.2). When viewing KML/COLLADA models in Google Earth, it is strongly recommended to use Google Earth (Pro) version 7 or higher and to switch to the OpenGL graphics mode for an optimal viewing experience. The graphics mode can be changed via the Tools menu, Options entry, 3D View tab.

_images/impexp_kml_export_googeearth_settings_fig.png

Setting the Graphics Mode in Google Earth.

_images/impexp_kml_export_googleearth_directx_fig.png

KML/COLLADA models rendered with DirectX; highlighting surface borders are noticeable everywhere.

_images/impexp_kml_export_googleearth_opengl_fig.png

The same scene rendered in OpenGL mode.

In addition to CityGML and CityJSON, 3D city model content stored in a 3D City Database can also be exported as KML [Wils2008], COLLADA [BaFi2008], and glTF [Khro2018] on the VIS Export tab shown below for visualization and use in a broad range of applications such as Earth browsers like Google Earth, ArcGIS Explorer, and Cesium.

_images/kml_collada_gltf_export_main_gui.png

The VIS Export tab allowing for exporting visualization models from the 3DCityDB.

On the VIS Export tab, all parameters required for the export have to be entered in a similar fashion as for a CityGML/CityJSON export (see Section 4.5). Mandatory inputs are the output file [1], the Level of Detail (LoD) to export from, and the display forms [2].

Output file selection

The path and filename of the main output file must be provided at the top of the dialog [1]. You can either manually enter the output file or use the Browse button to open a file selection dialog. This output file is always created as KML file that uses the KML network link mechanism to link the separate datasets containing the visualization models that are created during the export process. For most viewers and applications (e.g., Google Earth) it should be sufficient to just load this main output file. The viewer can parse and follow the links in this file to conditionally load and refresh the visualization models.

Since the format of the main output file is always KML, it is recommended to use .kml as file extension.

Level of Detail to export from

The export dialog lets you choose the LoD representation of the city objects that shall be used for creating the visualization models [2]. The corresponding drop-down list offers the five levels LoD0 - LoD4 as defined by CityGML and “highest LoD available” as additional option (default: LoD2).

For each top-level city object, the export operation will only query the geometry in the selected LoD and use this geometry as basis for the visualization export. Child objects such as boundary surfaces of buildings are also searched for geometries. If a geometry can be found neither for the top-level city object nor for one of its child objects in the given LoD, the object will not be exported.

When choosing “highest LoD available” rather than a specific LoD, the process will iterate from LoD4 to LoD0 for every city object and search for a geometry representation according to the above scheme. The first geometry found will be used for the visualization export. This way, every city object will be included in the export data, unless it has no geometry at all.

Note

The geometry representation of city objects usually gets more detailed and complex with higher LoDs. Thus, the visualization gets more detailed as well, but the export may take longer to generate the models. Likewise, file sizes increase and loading the models in Earth browsers might be slower. Tiled exports help to keep the file sizes small and loading times fast.

Display forms

Besides the LoD, the way the models shall be displayed (display form) must also be specified [2]. The user can choose one or more display forms. Each display form is generated based on the geometry of the city object in the specified LoD and also determines the output format of the corresponding visualization models. The following display forms are available:

Supported display forms and output formats

  • Footprint (output format: KML) – Objects are represented by a ground surface that is derived by projecting its geometry onto the earth surface.
  • Extruded (output format: KML) – Objects are represented as block models by extruding the Footprint representation to the highest point taken from the 3D bounding box.
  • Geometry (output format: KML) – Objects are represented with their full geometry.
  • COLLADA/glTF (output format: COLLADA and/or glTF) – Objects are represented with their full geometry. In contrast to Geometry, this display form also supports textures.

Note

The display forms Extruded, Geometry and COLLADA/glTF are not available when exporting from LoD0.

By default, all visualization models created from the exported city objects are organized into one visualization dataset per display form. Alternatively, you can choose to apply tiling (see below) which will result in one dataset per tile and display form. The datasets will contain the name of the display form as suffix in the filename, so that they can be easily identified and kept apart.

The following figure exemplifies the different display forms for one building. All representations have been derived from the LoD2 representation of the building.

_images/impexp_kml_export_displayforms_example.png

The same building displayed as Footprint, Extruded, Geometry, COLLADA/glTF with textures (from top left to bottom right)

If you want to create a visualization with textures or colors stored as appearances in the 3DCityDB, select the COLLADA/glTF display form and additionally pick an appearance theme from the drop-down list in [2] (default: none). Click the Query button to populate this drop-down list with the themes available in the database. If you have not established a database connection beforehand, the Importer/Exporter will automatically connect to the currently selected database entry on the Database tab to retrieve the list of appearance themes.

With the visible from input field next to each display form [2], you can control the visibility of the visualization models created for the separate display forms. When the contents of a dataset are projected onto the screen in the viewer, they must occupy an area of the screen that is greater than the number of pixels specified in visible from (in square pixels) in order to become visible. When exporting more than one display form at the same time, the visible from value of a higher-resolution display form also defines when a lower-resolution display form shall become invisible. This enables seamless switching between the different display forms in a viewer when zooming in and out.

For example, assume you export your city objects as Extruded with visible from set to 50 and as Geometry with visible from set to 200. If you open the scene in a viewer and zoom in, the Extruded models will become visible first as soon as they occupy an area of more than 50 pixels. When you zoom in further and the screen space criterion is fulfilled for the Geometry dataset, the Geometry dataset will become visible and the Extruded models are not displayed anymore. When you zoom out again, the viewer will switch back to the Extruded display.

Note

In the main KML output file, the visible from input values are mapped onto <minLodPixels> and <maxLodPixels> properties of different <Region> elements, with each <Region> representing one visualization dataset (also see [Wils2008]). The last display form in the list receives a value of -1 for <maxLodPixels> and, thus, will not become invisible when zooming in even more. A viewer must support these KML elements in order for the switching between display forms to work.

Note

Proper visible from values depend on both the spatial extent and the file size of a visualization dataset. Thus, the values must always be adapted to a specific export. An example of a good combination for typical LoD2 city objects and a tile size of 250x250m could be:

  • Footprint visible from 50 pixels,
  • Extruded visible from 125 pixels,
  • Geometry or COLLADA/glTF visible from 200 pixels.

Caution

For Oracle, the Footprint and Extruded display forms are implemented based on the spatial function SDO_AGGR_UNION. This function is not included in Oracle 10g/11g Locator but requires Spatial for these database versions. Starting from Oracle 12c, the function is also included in Locator.

Export filters

The visualization export operation offers thematic and spatial filters to restrict an export to a subset of the 3D city model content stored in the database [3]. The available filters are discussed in separate sections of this chapter.

To enable a filter, simply select its checkbox. This will automatically make the filter dialog visible. Make sure to provide the mandatory input for the filter to work correctly. If more than one filter is enabled, the filters are combined in a logical AND operation, i.e. all filter criteria must be fulfilled for a city object to be exported. If no checkbox is enabled, no filters are applied and, thus, all features contained in the database will be exported.

Note

All export filters are only applied to top-level features but not to nested sub-features.

Tiled exports

When exporting 3D city model content to a single visualization file, the file size may quickly grow. The performance of a viewer application may significantly drop for large files if the data can be displayed at all, especially when using a web client for visualization. It is therefore strongly recommended to use tiling for visualization exports. With tiling enabled, the area to be exported is split into a regular grid of tiles, and a separate visualization dataset is created per tile and display form. Since every tile only contains a subset of all city objects to be exported, file sizes will be substantially smaller and can be loaded faster in a viewer. Please refer to Section 4.6.2 for how to enable tiling.

The exported tile files are organized in a hierarchical tree structure in the file system, where the folder names reflect the row and column numbers of the grid used in the tiling process. The tile files are the leaves of this tree and are stored within the folder hierarchy based on their row and column index in the grid. This makes accessing the individual tiles very easy. As mentioned above, there will be one tile file per display form. The name of the display form is added as suffix to the filename to be able to distinguish the files. The root folder of this tree structure is called Tiles.

The following figure shows an example of the resulting tile tree structure for a grid with 3 rows and 2 columns.

_images/impexp_kml_export_directory_hierarchy_fig.png

Example: hierarchical directory structure for export of 3x2 tiles

Note

The Tiles hierarchy is also created if tiling is disabled. In this case, a single visualization dataset is created per display form and is stored at (0, 0) in the tree.

Export preferences

More fine-grained preference settings affecting the visualization export operation are available on the Preferences tab of the operations window [5]. Make sure to check these settings before starting the export process. A full documentation of the export preferences is provided in Section 4.6.7. The following table provides a summary overview.

Summary overview of the visualization export preferences

  • General options affecting the visualization export as well as COLLADA/glTF specific settings.
  • Defines the styling to be used for the different top-level features.
  • Defines KML-based information balloons to be displayed when clicking on a top-level feature.
  • Controls elevation settings like the altitude mode and height offset for features.

Starting the export process

Once all settings are complete, the visualization export is triggered with the Export button [4] at the bottom of the dialog (cf. Fig. 4.76). If a database connection has not been established manually beforehand, the currently selected entry on the Database tab is used to connect to the 3D City Database. The separate steps of the export process as well as all errors and warnings that might occur during the export are reported to the console window. The overall progress is shown in a separate status window. This status window also offers a Cancel button to abort the export process at any time.

Output files

The export process will create several output files inside the export directory [1]:

  • The main KML file as specified in [1]. This file links all visualization datasets as described above. For most viewers it should therefore be sufficient to just open this file in order to load and visualize the entire scene.
  • The Tiles folder containing the tile files as described above. Note that every tile file might reference additional external files such as COLLADA/glTF files or texture images, which are stored relative to the location of the tile file.
  • One master JSON file per display form that contains metadata about the exported content. The name of the display form is added to the filename to be able to relate the metadata with a display form.

The following snippet shows the content of an example master JSON file. The JSON properties reflect the most relevant export settings.

{
   "version": "1.0.0",
   "layername": "NYC_Buildings",
   "fileextension": ".kmz",
   "displayform": "extruded",
   "minLodPixels": 140,
   "maxLodPixels": -1,
   "colnum": 29,
   "rownum": 23,
   "bbox": {
      "xmin": -74.0209007,
      "xmax": -73.9707756,
      "ymin": 40.6996416,
      "ymax": 40.7295678
   }
}

Applications can derive more information based on the provided metadata. For example, the length and width (in WGS84) of each tile can be determined using the following formulas:

\[TileWidth = \frac{bbox.xmax - bbox.xmin}{colnum}\]
\[TileLength = \frac{bbox.ymax - bbox.ymin}{rownum}\]

Based on these values, applications are also able to use the following formulas to easily retrieve the row and column index of the tile in which a given point lies:

\[ColumnIndex = floor( \frac{X - bbox.xmin}{TileWidth} )\]
\[RowIndex = floor( \frac{Y - bbox.ymin}{TileLength} )\]

where X and Y denote the WGS84 coordinates of the given point.

For a bounding box interactively drawn in the scene, all intersecting tiles can also easily be identified. First, the row and column indices for the lower left and upper right corner points of the bounding box have to be calculated as shown above. Assume that the result is given as (R1, C1) and (R2, C2) respectively. A tile intersects with this bounding box if its RowIndex and ColumnIndex fulfill the following condition:

\[(R1 \leq RowIndex \leq R2) \land (C1 \leq ColumnIndex \leq C2)\]
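
The following JavaScript sketch implements these formulas based on the master JSON file. It assumes the file has already been parsed into an object (here called metadata); the function and variable names are only illustrative.

// Compute the tile size in WGS84 degrees from the master JSON metadata.
function tileSize(metadata) {
  var bbox = metadata.bbox;
  return {
    width:  (bbox.xmax - bbox.xmin) / metadata.colnum,
    length: (bbox.ymax - bbox.ymin) / metadata.rownum
  };
}

// Determine the column and row index of the tile containing a WGS84 point (x, y).
function tileIndex(metadata, x, y) {
  var size = tileSize(metadata);
  return {
    column: Math.floor((x - metadata.bbox.xmin) / size.width),
    row:    Math.floor((y - metadata.bbox.ymin) / size.length)
  };
}

// Check whether the tile at (row, column) intersects a query bounding box given by
// its lower left (xmin, ymin) and upper right (xmax, ymax) corner points in WGS84.
function tileIntersects(metadata, row, column, xmin, ymin, xmax, ymax) {
  var lowerLeft  = tileIndex(metadata, xmin, ymin);  // (R1, C1)
  var upperRight = tileIndex(metadata, xmax, ymax);  // (R2, C2)
  return row >= lowerLeft.row && row <= upperRight.row &&
         column >= lowerLeft.column && column <= upperRight.column;
}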

Preferences

Plugins

Plugins extend the core functionality of the Importer/Exporter and can be developed by everyone based on a well-defined plugin API (see Section 5.1). This preferences dialog provides an overview of all plugins that are installed for the Importer/Exporter and lets you enable or disable them. The dialog is only available if at least one plugin has been installed.

_images/impexp_preferences_plugins_fig.png

Plugins overview and management.

The installed plugins are presented in a list view. Simply browse through this list to display more information about each plugin. This information typically comprises the name of the plugin, its version and a brief description of its functionality. Plugin vendors may provide additional links to online resources such as the plugin homepage or a user manual. The technical details section contains implementation details of the plugin, which are useful in case you encounter issues and need support from the plugin maintainer.

A plugin is enabled or disabled by ticking the checkbox in the list entry. Disabling unnecessary plugins can increase the performance of Importer/Exporter operations affected by the plugins.

Note

You cannot install or remove plugins using this dialog. Plugins are rather installed by copying the plugin files into the plugins subfolder within the installation directory of the Importer/Exporter and uninstalled by simply deleting these files again. Both actions require a restart of the Importer/Exporter. Go to Section 5.1 for more information.

Reference systems

When setting up an instance of the 3D City Database, a coordinate reference system (CRS) must be chosen for the entire database (cf. Section 1.3). This CRS is used as default reference system for all spatial objects that are created and stored in the database instance (except implicit geometries) as well as for building spatial indexes and performing spatial functions.

At many places, however, the Importer/Exporter allows for providing coordinate values associated with a different CRS, e.g. when defining spatial bounding box filters for the different import and export operations or when defining a target CRS into which coordinate values shall be converted during CityGML/CityJSON exports. To add and manage additional reference systems, the Importer/Exporter provides the Reference systems settings below the Database preferences node.

_images/impexp_database_preferences_crs_fig.png

Database preferences – Reference systems.

On top of the dialog [1], a drop-down list allows for choosing a CRS for display and editing from the list of user-defined CRSs. This list contains at minimum one predefined entry called Same as in database that represents the internal CRS of the 3D City Database instance. This entry will always show the SRID and gml:srsName of the currently connected database instance.

Note

The internal CRS cannot be edited in this dialog. If you want to change the internal CRS, then follow the steps in Section 4.3.4.

A new user-defined CRS can be added to this list by clicking the New button. Please provide the database-specific SRID in the corresponding SRID input field of the user dialog and enter the URN encoding of the CRS into the gml:srsName input field (optional). This field also provides a drop-down list of commonly used encoding schemes which can be used as template (such as the OGC encoding scheme). A short, meaningful textual description of the CRS must be provided in the Description field. This description is used as value for the drop-down list at the top of the dialog, but also for similar CRS drop-down lists on further tabs of the Importer/Exporter. The new CRS is added to the list of user-defined CRSs upon clicking the Apply button. The following screenshot provides an example.

_images/impexp_database_preferences_add_crs_fig.png

Adding a new CRS to the list of user-defined CRSs.

The Copy button allows for adding further CRSs by copying and editing the information of an already existing one. The currently selected CRS is deleted from the list by clicking the Delete button. The Check button next to the SRID input field lets you verify whether the provided SRID is supported by the currently connected 3D City Database instance. After a successful check, the non-editable fields Database name and SRS type will be filled with the corresponding information queried from the 3D City Database instance. If the Importer/Exporter is not connected to a database instance, the Check button is disabled.

The result of the SRID verification may vary between different 3D City Database instances since 1) the list of predefined spatial reference systems differs between different database systems and versions, and 2) both PostgreSQL/PostGIS and Oracle support the definition of user-defined spatial reference systems on the database side (please check the respective database documentation for guidance).

Note

In order to add a user-defined CRS to the Importer/Exporter that is not supported by the underlying PostgreSQL/PostGIS or Oracle database, you need to first register this CRS in your database. As soon as the CRS is available from the database, it can be added to the list of user-defined CRSs in the Importer/Exporter.

Note

You can also create multiple CRS definitions for the same database SRID that only differ in their gml:srsName values. This way, you can easily use different gml:srsName values for different CityGML exports, for example, in case specific target applications require a certain but different syntax for the gml:srsName.

The list of user-defined CRSs is automatically stored in the config file of the Importer/Exporter and loaded upon application start. It can additionally be exported into an extra file (see [2] in Fig. 4.80). This allows for easily sharing user-defined CRSs between different installations of the Importer/Exporter. Please provide a valid filename in the corresponding input field Filename (use the Browse button to open a file selection dialog) and click on Save.

There are two options for importing an external list of CRSs:

  1. Add: the CRSs listed in the external file are added to the current list of CRSs.
  2. Replace: the current list of CRSs is replaced with the entries from the external file.

The Importer/Exporter is shipped with a number of predefined CRSs organized in subfolders below templates/CoordinateReferenceSystems in the installation folder. Each CRS definition is stored in its own file and, thus, can be easily imported and added to the list of user-defined CRSs. Note that the gml:srsName of the predefined CRSs generally lacks a height reference system. It should therefore be added before using this CRS as target reference system for CityGML exports.

Cache

Both during CityGML imports and exports, the Importer/Exporter has to keep track of various temporary information. For instance, when resolving XLinks, the identifiers as well as additional information about the features and geometries must be available. Since the Importer/Exporter is designed to be able to process arbitrarily large CityGML input files, keeping this information in main memory only is not a promising strategy. For this reason, the information is written to temporary tables in the database as soon as user-defined memory limits are reached.

_images/impexp_preferences_general_cache_fig.png

General preferences – Cache.

By default, temporary tables are created in the 3D City Database instance itself. The tables are populated during the import and export operation and are automatically dropped after the operation has finished. Alternatively, the user can choose to store the temporary information in the local file system instead. An absolute path where to create the file-based storage has to be provided. Either type the location manually into the input field or use the Browse button to open a file selection dialog. The file-based storage is also automatically removed after the operation has finished.

Some reasons for using a file-based storage are:

  • The 3D City Database instance is kept clean from additional tables holding temporary process information.
  • If the Importer/Exporter runs on a different machine than the 3D City Database instance, sending temporary information over the network might be slow. In such cases, using a local storage might help to increase performance, especially if fast disk drives are used.

Import and export path

This preference dialog allows for setting a default path for import and export operations.

_images/impexp_preferences_general_path_fig.png

General preferences – Import and export path.

Simply choose between the last used import/export path (default) or browse for a specific folder in your local file system. The selected folder will then be used as default path in all dialogs that require an input/output file.

Network proxies

Some of the functionalities offered by the Importer/Exporter require internet access. This applies, for instance, to the XML validation when accessing XML Schema documents on the web, to the map window for the graphical selection of bounding boxes, or to the automatic calculation of height offsets during visualization exports based on the Google Elevation API web service.

Most computers in corporate environments have no direct internet access but must use a proxy server. The preference dialog shown below lets you configure network proxies.

_images/impexp_preferences_general_network_proxies_fig.png

General preferences – Network proxies.

The Importer/Exporter supports Web (HTTP), Secure web (HTTPS) and SOCKS proxies. To provide proxy information for one of the protocols, simply select the corresponding entry from the list and enter the proxy settings in the input fields below: Server, Port, and, if the proxy requires login credentials, also Username and Password. Default port values for each protocol are automatically chosen (HTTP: 80; HTTPS: 443; SOCKS: 1080) and only need to be changed if required.

It is also possible to define one single proxy for all protocols by simply selecting the corresponding checkbox under the protocol list. Just make sure the proxy server supports all protocols and that they can all be routed through the given port. Contact your IT administrator in case you are unsure.

Proxies are only used if the checkbox next to the protocol type is enabled. Otherwise, the proxy configuration will be stored but remains inactive. When the proxy for a given protocol is enabled, every outgoing connection by the Importer/Exporter that uses the protocol will be routed through this proxy.

In case the computer running the Importer/Exporter is directly connected to the internet, no proxies need to be configured.

API Keys

The Importer/Exporter uses external web services offered by third party providers for different tasks and functionalities. Some of these services are open and free to use, whereas others are more restrictive and require passing an API key to use the service. With the API Keys preference dialog, you can provide your API keys for different services.

_images/impexp_preferences_general_apikeys_fig.png

General preferences – API keys.

The Google Maps API services can be used by the Importer/Exporter for two different tasks: 1) the Geocoding API is used for geocoding addresses and address lookups in the map window (cf. Section 4.8), and 2) the Maps Elevation API is used in visualization exports for retrieving height values from the Google Earth terrain model (cf. Section 4.6.7.4). If you want to use one of these services, you must enter your corresponding API key in the above dialog. Otherwise the services will respond with an error message that will be displayed by the Importer/Exporter. Please visit the Google Maps API website if you do not have an API key yet.

Note

Google changed the usage and pricing policies for the above-mentioned services on July 16, 2018. In previous versions of the Importer/Exporter, the services could therefore be used without entering an API key.

Logging

The Importer/Exporter logs information about events such as activities or failures, for instance, during database imports and exports. Each log entry consists of a timestamp when the event occurred, a log level indicating the severity of the event and a human-readable message text. Log messages are always printed to the console window and may additionally be forwarded to a log file on your local computer. The Logging preference dialog is shown below.

_images/impexp_preferences_general_logging_fig.png

General preferences – Logging.

The following four log levels are distinguished (from highest to lowest severity):

Log levels and their meaning.

  • ERROR – An error has occurred (usually an exception). This comprises internal and unexpected failures. Moreover, invalid content of CityGML/CityJSON input files is reported via this log level. Fatal errors will cause the operation in progress to abort.
  • WARN – An unusual condition has been detected. The operation in progress continues to work but the user should check the warning and take appropriate actions.
  • INFO – An interesting piece of information about the current operation that helps to give context to the log, often when processes are starting or stopping.
  • DEBUG – Additional messages reporting the internal state of the application.

The log level for messages in the console window can be chosen from a drop-down list in the Console dialog [1]. The log will include all events of the indicated severity as well as events of greater severity (default: INFO). Word wrapping can be optionally enabled for long message texts that otherwise exceed the width of the console window. In addition, the color scheme for console log messages can be customized by assigning text colors to each log level.

Note

The log output in the console window is truncated after 10,000 log messages in order to prevent the UI from getting unresponsive.

If log messages shall additionally be stored in a log file, simply activate the option Write log messages to log file. The log file is named impexp_{date}.log by default, with the {date} token being replaced with the current date at program start. By default, the Importer/Exporter appends log messages to the end of the file in case the log file already exists. To change this behaviour, enable the Truncate log file at program start option which will clear the log file with every start of the Importer/Exporter.

Log files are by default stored in the home directory of the operating system user running the Importer/Exporter. More precisely, you will find the log files in the subfolder 3dcitydb/importer-exporter/log. The location of the home directory differs between operating systems. Using environment variables, the location can be identified dynamically:

  • %HOMEDRIVE%%HOMEPATH%\3dcitydb\importer-exporter\log (Windows 7 and higher)
  • $HOME/3dcitydb/importer-exporter/log (UNIX/Linux, Mac OS families)

You can also choose a different directory or even a different target file through the option Use alternative log file [2]. Either enter the directory or log file manually or click on Browse to open a file selection dialog. The log level can be chosen independently from the console window through the corresponding drop-down list [2] (default: INFO).

Language selection

The Importer/Exporter user interface has support for different languages. Use the Language selection preference dialog shown below to pick your favourite language.

_images/impexp_preferences_general_language_fig.png

General preferences – Language selection.

In addition to the settings on the Import, Export, VIS Export and Database tabs, more preferences affecting the separate operations of the Importer/Exporter as well as general application settings are available on the Preferences tab shown below.

_images/preferences_main_gui.png

The preferences dialog.

The preferences are organized into separate topics using a tree view [1] on the left side of the dialog. Simply navigate through this tree view by selecting, expanding and collapsing the separate tree nodes. When selecting a node, the associated settings dialog is displayed on the right side [2]. Changes made to the settings of the selected node are applied through the Apply button [3]. The buttons Restore and Default allow for resetting the preferences to their previous state or to their default values.

The preference settings offered by the Import, Export, and VIS Export nodes are not discussed in this chapter but in the context of the corresponding operations. Use the links below to directly jump to these chapters.

This chapter focuses on the Database and General preferences. The following table provides a summary overview of the available settings.

Summary overview of the database and general preferences

  • Plugins – Overview and management of installed plugins.
  • Reference systems – Management of user-defined coordinate reference systems.
  • Cache – Defines cache settings for storing temporary information during import and export operations.
  • Import and export path – Default paths for import and export operations.
  • Network proxies – Allows for defining network proxies for different network protocols.
  • API Keys – Lets you manage your API keys for different web services.
  • Logging – Controls the logging of messages to the console and to files.
  • Language selection – Defines the language of the Importer/Exporter user interface.

All preferences (including the settings on the separate operation tabs) are automatically stored in the configuration file of the Importer/Exporter and are restored from this file upon program start. Using the File menu [4] available from the menu bar of the Importer/Exporter, the preferences can optionally be stored in or loaded from user-defined configuration files (cf. Section 4.2).

Note

In case plugins have been registered with the Importer/Exporter, each plugin might add another node to the preferences tree. Please refer to the manual of the plugin for a documentation of the preference settings.

Map window

The Importer/Exporter offers a 2D map window that allows a user to display the overall bounding box calculated from the city model content stored in a 3D City Database instance and to graphically select a bounding box for data imports and exports.

There are two ways to open the map window:

  1. Choose the entry View -> Open Map Window from the menu bar at the top of the application window.

    _images/impexp_map_window_menu_fig.png
  2. Click the map button map_select on the bounding box dialog that is available for the different import and export operations.

    _images/impexp_map_window_component_fig.png

The 2D map is rendered in a separate application window shown below.

_images/impexp_map_window_layout_fig.png

2D map window for bounding box selections.

The map content is provided by the OpenStreetMap (OSM) service and is subject to the OSM usage and license terms. Make sure your computer has internet access to load the map. This might require setting up network proxies (see Section 4.7.5). Please consult your network administrator.

The map offers default mouse controls for panning and zooming. For convenience, a geocoding service is included in the map window [1]. Simply type in an address or a geo location (given by geographic lat/lon coordinates separated by a comma) and click the search button map_search. The map will automatically zoom to the first match. Further matches are available from a drop-down list [1]. The geocoding service uses the free OSM Nominatim service by default. You can pick the Google Geocoding API as alternative service from the drop-down list in [5]. Note that the Google Geocoding API is not free but requires an API key that must be entered in the global preferences of the Importer/Exporter (cf. Section 4.7.6). Otherwise, the service will respond with an error message. Independent of the service you choose, make sure that you adhere to its terms of use.

To display the result of the geocoding query on Google Maps in your default internet browser, simply click the Show in Google Maps button [6].

A list of usage hints is available at the right top of the map window [7]. Please click on the Show usage hints link to display this list. The map controls are also described in the following.

  • Select bounding box: Move the mouse while pressing the ALT key and the left mouse button to select a bounding box. Once the left mouse button is released, the coordinates of the bounding box are automatically filled into the Bounding box dialog on the left of the map [3]. If you have opened the map window from a bounding box filter dialog, clicking the Apply button in the upper right corner of the window [2] closes the map window and carries the bounding box values over to the filter dialog. In addition, the values are copied to the clipboard.
  • Lookup address: Right-click on the map to bring up a context menu for the geo location at the mouse pointer. From the context menu, choose Lookup address here. This will trigger a reverse geocoding query using the geocoding service selected in [5]. The resulting address will be displayed on the left of the window [4]. The waypoint icon denotes which location on the map is associated with the address, whereas the waypoint_reverse icon shows where you clicked on the map (see Fig. 4.92).
  • Zoom in/out: Use the mouse wheel or the context menu (right-click).
  • Zoom into selected area: Move the mouse while pressing the SHIFT key and the left mouse button to select an area. Once the left mouse button is released, the map zooms into the selected area. If the maximum zoom level is reached this action has no further effect.
  • Move map: Keep the left mouse button pressed to move the map.
  • Center map and zoom in: Double click the left mouse button to center the map at that position and to increase the current zoom level by one step.
  • Use popup menu for further actions: Right-click on the map to bring up a context menu offering additional functions such as Zoom in, Zoom out, Center map here and Lookup address here (see above). The Get map bounds function is equivalent to selecting the visible map content as bounding box, and the map bounds are transferred to the Bounding box dialog on the left [3].

To close the map, simply click the Cancel button in the upper right corner [2].

_images/impexp_map_window_address_lookup_fig.png

Address lookup in the map window.

The coordinates in the map window and of the selected bounding box are always given in WGS 84 regardless of the coordinate reference system of the 3D City Database instance.

When opening the map window from a bounding box dialog that already contains coordinate values, the map window will automatically display this bounding box. If the coordinate values of the provided bounding box are not in WGS 84, a transformation to WGS 84 is required. Since the Importer/Exporter uses functionality of the underlying spatial database system for coordinate transformations, a connection to the database must have been established beforehand. In case there is no active database connection, the following pop-up window asks the user for permission to connect to the database.

_images/impexp_map_window_coordinate_transformation_fig.png

Asking for permission before connecting to a database for coordinate transformation.

The Apply button in the upper right corner of the map window [2] is a shortcut for copying the coordinate values to the clipboard and pasting them into the bounding box fields of the calling tab on the operations window. Furthermore, coordinate values can easily be copied from one tab to another: click the copy button bbox_copy on a tab with filled bounding box values (say, the Import tab), switch to another tab (say, the VIS Export tab), and click the bbox_paste button there. Previously existing values in the bounding box fields of the target tab (if any) will be overwritten.

Using the command-line interface (CLI)

Help command

Synopsis

impexp help [-h] [--ade-extensions=<folder>] [-c=<file>]
            [--log-file=<file>] [--log-level=<level>]
            [--pid-file=<file>] [--plugins=<folder>]
            [--use-plugin=<plugin[=true|false]>[,<plugin[=true|false]
            >...]]... [@<filename>...] [COMMAND...]

Description

The help command is used to display the usage help for the specified command. When no command is given, the usage help for the main command is shown instead. Alternatively, you can invoke the command with the --help flag to get its usage help.

Options

COMMAND...

The name of the COMMAND for which the usage help shall be displayed.

Examples

$ impexp help import

Print the help information about the import command.

Import command

Synopsis

impexp import [-hV] [--[no-]fail-fast] [--ade-extensions=<folder>]
              [-c=<file>] [--import-log=<file>]
              [--input-encoding=<encoding>] [--log-file=<file>]
              [--log-level=<level>] [--pid-file=<file>]
              [--plugins=<folder>] [--use-plugin=<plugin[=true|false]>[,
              <plugin[=true|false]>...]]... [--worker-threads=<threads[,
              max]>] [[--creation-date=<mode>]
              [--termination-date=<mode>] [--lineage=<lineage>]
              [--updating-person=<name>] [--reason-for-update=<reason>]
              [--use-metadata-from-file]] [[[-t=<[prefix:]name>[,<
              [prefix:]name>...]]... [--namespace=<prefix=name>[,
              <prefix=name>...]]...] [-i=<id>[,<id>...] [-i=<id>[,
              <id>...]]...] [-b=<minx,miny,maxx,maxy[,srid]>
              [--bbox-mode=<mode>]] [[--count=<count>]
              [--start-index=<index>]] [[--no-appearance]]] [-f=<file>
              [-m=<mode>] [-w] [[-n=<name>] [-I=<index>] [--[no-]header]
              [-D=<string>] [-Q=<char>] [--quote-escape=<char>]
              [-M=<char>] [--csv-encoding=<encoding>]]] [[-T=<database>]
              [-H=<host>] [-P=<port>] [-d=<name>] [-S=<schema>]
              [-u=<name>] [-p[=<password>]]] [@<filename>...] <file>...

Description

The import command imports one or more CityGML or CityJSON files into the 3D City Database. It corresponds to the import operation offered on the Import tab of the graphical user interface (see Section 4.4). The command provides a range of options to adapt the import process. If you miss settings available for the GUI version of this command, you can use the --config option to pass a config file containing these settings.

General options

<file>...

One or more input files or directories to be imported. This parameter is mandatory. Glob patterns are supported to specify a set of filenames using wildcard characters (e.g., /path/to/*.gml). If a directory is provided, it will be recursively scanned for CityGML/CityJSON files. The supported input file formats and extensions are identical with those listed in Table 4.7.

--input-encoding=<encoding>

Specify the encoding of the input file(s). For XML documents, the encoding is automatically parsed from the XML declaration at the beginning of the dataset if present. Provide an official IANA-based character encoding name. The default value is UTF-8.

--import-log=<file>

If you want an import log to be created for all top-level features loaded into the database, provide the path to the import log file with this option. Note that the log file will be truncated if it already exists. More information about the import log can be found in Section 4.4.6.11.

--[no-]fail-fast

Flag to indicate whether to fail fast on errors and to immediately cancel the import process (default: true).

--worker-threads=<threads[,max]>

Number of parallel threads to use for the import process. You can also specify a minimum and maximum number of threads separated by commas. A general description of the multithreaded processing used for the import process is provided in Section 4.4.6.12.
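
For example (connection details and thread counts are placeholders), the following call limits the import to a minimum of two and a maximum of four worker threads:

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password \
                --worker-threads=2,4 my_city.gml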

Metadata options

The import command allows for specifying metadata that is assigned to every city object during import and stored in columns of the table CITYOBJECT.

--creation-date=<mode>

Specifies how to set the creationDate attribute for city objects. Allowed values are replace (default), complement and inherit. If the creation date is not available from the city object during import, it may either be set to the import date (complement) or be inherited from the parent object, if available (inherit). Alternatively, the user can choose to replace all creation dates from the input files with the import date (replace).

--termination-date=<mode>

Specifies how to set the terminationDate attribute for city objects. Allowed values are replace (default), complement and inherit. If the termination date is not available from the city object during import, it may either be set to NULL (complement) or be inherited from the parent object, if available (inherit). Alternatively, the user can choose to replace all termination dates in the input files with NULL (replace).

--lineage=<lineage>

Value to store as lineage of the city objects.

--updating-person=<name>

Name of the user responsible for the import. By default, the name of the database user is chosen.

--reason-for-update=<reason>

Reason for importing the data.

--use-metadata-from-file

Flag to indicate that values for lineage, updating person and reason for update stored for the city objects in the input file should take precedence over the values defined on the command line.
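
As an illustration (connection details and metadata values are placeholders), the following call stores custom lineage and update information for every imported city object and lets missing creation dates be inherited from parent objects where available:

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password \
                --lineage="cadastre 2021" --updating-person="Jane Doe" \
                --reason-for-update="initial import" \
                --creation-date=inherit my_city.gml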

Import filter options

The import command offers additional options to define both thematic and spatial filters that are used to narrow down the set of top-level city objects to be imported from the input file(s).

-t, --type-name=<[prefix:]name>[,<[prefix:]name>...]

Comma-separated list of one or more names of the top-level feature types to be imported. The type names are case sensitive and shall match one of the official CityGML feature type names or a feature type name from a CityGML ADE. To avoid ambiguities, you can use an optional prefix for each name. The prefix must be associated with the official XML namespace of the feature type. You can either use the official CityGML namespace prefixes listed in Table 4.11, or use the --namespace option to declare your own prefixes.

--namespace=<prefix=name>[,<prefix=name>...]

Used to specify namespaces and their prefixes as comma-separated list of one or more prefix=name pairs. The prefixes can be used in other options such as --type-name.
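
For illustration (connection details and file names are placeholders), the following call declares the prefix bldg for the CityGML 2.0 building namespace and uses it to restrict the import to Building features:

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password \
                --namespace=bldg=http://www.opengis.net/citygml/building/2.0 \
                -t bldg:Building my_city.gml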

-i, --resource-id=<id>[,<id>...]

Comma-separated list of one or more identifiers. Only top-level features having a matching value for their identifier attribute will be imported.

-b, --bbox=<minx,miny,maxx,maxy[,srid]>

2D bounding box to use as spatial filter. The bounding box is given by four coordinates that define its lower left and upper right corner. By default, the coordinates are assumed to be in the same CRS that is used by the 3DCityDB instance. Alternatively, you can provide the database srid of the CRS associated with the coordinates as fifth value (e.g. 4326 for WGS84). All values must be separated by commas. The bounding box is evaluated against the gml:boundedBy property (CityGML) or the geographicalExtent property (CityJSON) of the input features.

--bbox-mode=<mode>

Choose the mode of the bounding box filter. Allowed values are overlaps (default) and within. When set to overlaps, all features overlapping with the bounding box are imported. Otherwise, features must be within the given bounding box. Can only be used together with the --bbox option.

--count=<count>

Maximum number of top-level features to be imported.

--start-index=<index>

Index within the set of all top-level features over all input file(s) from which the import shall begin. The start index uses zero-based numbering. Thus, the first top-level feature is assigned the index 0, rather than the index 1.

--no-appearance

Flag to indicate that appearances of the features shall not be imported.

Import list options

You can also pass an import list to the import command to control which city objects should be imported or skipped during import. Please refer to Section 4.4.2 for a description of the import list filter. The following options let you define the layout and reserved characters of the import list.

-f, --import-list=<file>

Specify the path to the import list file to use in the import operation.

-m, --import-list-mode=<mode>

Specify the mode of the import list filter. Allowed values are import and skip. When choosing import, only city objects having an identifier that is contained in the import list will be imported. In skip mode, matching city objects will not be imported. The default mode is import.

-w, --import-list-preview

Use this option to get a preview of the first few lines of the import list when applying the provided options for parsing and interpreting the import list. This preview is very helpful to adapt and specify the delimiter character(s), quoting rules, header information, identifier column name or index, etc. The preview is printed to the console.

-n, --id-column-name=<name>

Name of the column that holds the identifier value. Using this option only makes sense if the import list contains a header line. Otherwise, use the --id-column-index option to specify the column index.

-I, --id-column-index=<index>

Index of the column that holds the identifier value. The first column of a row has the index 1. If this option is omitted, the value of the first column of each row will be used as identifier by default. This option is mutually exclusive with the --id-column-name option (only specify one).

--[no-]header

Define whether or not the import list uses a header line. By default, the import operation assumes that the first line contains header information.

-D, --delimiter=<string>

Specify the delimiter used for separating values in the import list. By default, a comma , is assumed as delimiter. The provided delimiter may consist of more than one character.

-Q, --quote=<char>

The values in the import list may be quoted (i.e., enclosed by a reserved character). This option lets you define the character used as quote (default: "). Only single characters are allowed as value.

--quote-escape=<char>

If the import list contains quoted values, define the character used for escaping embedded quote characters. The default escape character is ". Only single characters are allowed as value.

-M, --comment-marker=<char>

Specify the character used as comment marker in the import list. Any line starting with this comment marker is interpreted as comment and, thus, will be ignored. The default comment marker is #. Only single characters are allowed as value.

--csv-encoding=<encoding>

Define the encoding of the import list using an IANA-based character encoding name. UTF-8 is assumed as default encoding.
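
The following call sketches how these options can be combined (connection details and file names are placeholders): the import list my_list.csv uses a semicolon as delimiter, has no header line, and holds the identifiers in its first column; the -w flag prints a preview of the parsed list to the console:

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -f my_list.csv -D ";" --no-header -I 1 -w \
                my_city.gml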

Database connection options

The following options allow you to define the connection details that shall be used for establishing a connection to the 3D City Database. You can also use environment variables for this purpose (see Section 4.9.8).

-T, --db-type=<database>

Specify the database system used for running the 3DCityDB. Allowed values are postgresql for PostgreSQL/PostGIS databases (default), and oracle for Oracle Spatial/Locator databases.

-H, --db-host=<host>

Specify the host name of the machine on which the 3DCityDB database server is running.

-P, --db-port=<port>

Specify the TCP port on which the 3DCityDB database server is listening for connections. The default value is 5432 for PostgreSQL and 1521 for Oracle.

-d, --db-name=<name>

Specify the name of the 3DCityDB database to connect to. When connecting to an Oracle database, provide the database SID or service name as value.

-S, --db-schema=<schema>

Name of the database schema to use when connecting to the 3DCityDB. If not provided, the citydb schema is used for PostgreSQL by default, whereas the schema of the user specified by the option --db-username is used under Oracle.

-u, --db-username=<name>

Connect to the 3DCityDB database server as the user given by name.

-p, --db-password[=<password>]

Specify the password to use when connecting to the 3DCityDB database server. You can either provide the password as value for this option or leave the value empty to be prompted to enter the password on the console before connecting to the database. If you skip this option completely, the impexp tool will try to connect to the database without a password. If the database server requires password authentication and a password is not available by other means, the connection attempt will fail in this case.

Examples

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password my_city.gml

Import the dataset my_city.gml into the 3DCityDB. The 3DCityDB is supposed to be running on a PostgreSQL database on the same machine. The connection will be established to the citydb_v4 database with the user citydb_user and the password my_password.

$ impexp import -H localhost -d citydb_v4 -p -u citydb_user \
                -t Building,CityFurniture /path/to/**/*.json

The folder /path/to is recursively scanned for files with the extension .json. From these files, only Building and CityFurniture features will be imported. Note that the password option -p is provided without a value and, thus, the user will be prompted for the password.

$ impexp import -H localhost -d citydb_v4 -p -u citydb_user \
                -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                --bbox-mode=within my_city.gml

Import all city objects from the my_city.gml file that are within the provided bounding box. The coordinates of the bounding box are given in WGS84. For this reason, the fifth value 4326 of the -b option denotes the SRID that is used by the target database for the WGS84 reference system.

$ impexp import -T oracle -H localhost -S other_user -d citydb_v4 -p -u citydb_user \
                --count=10 --start-index=20 /my/city/model/files/

Recursively scan the folder /my/city/model/files/ for all supported input files and import them into a 3DCityDB running on Oracle. Only 10 features will be imported starting from the 20th feature in the set. Note that the connection is established with the user citydb_user but the data will be imported into the schema of the user other_user. The citydb_user must therefore have been granted sufficient privileges.

$ impexp import -H localhost -d citydb_v4 -p -u citydb_user \
                -f imported-features.log -m skip -I 2 \
                /my/city/model/files/

Use an import list filter to skip all city objects from the input files in /my/city/model/files/ whose identifier matches a value from the import list imported-features.log. The command uses default values for parsing and interpreting the import list except for the index of the column that holds the identifier values, which is set to 2.

The above command has been chosen deliberately to illustrate how you can resume an import operation that was aborted or failed due to errors. Of course, you must have enabled import logs for this to work. The import log will contain the identifiers of those city objects that were successfully imported before the operation failed. Thus, by using the filter mode skip they will be skipped when re-running the import operation with the above command.

Export command

Synopsis

impexp export [-hV] [--[no-]fail-fast] [--ade-extensions=<folder>]
              [-c=<file>] [--compressed-format=<format>]
              [--log-file=<file>] [--log-level=<level>] -o=<file>
              [--output-encoding=<encoding>] [--pid-file=<file>]
              [--plugins=<folder>] [--use-plugin=<plugin[=true|false]>[,
              <plugin[=true|false]>...]]... [--worker-threads=<threads[,
              max]>] [[[-t=<[prefix:]name>[,<[prefix:]name>...]]...
              [--namespace=<prefix=name>[,<prefix=name>...]]...]
              [[-r=<version>] [-R=<timestamp[,timestamp]>]] [-i=<id>[,
              <id>...] [-i=<id>[,<id>...]]...] [--db-id=<id>[,<id>...]
              [--db-id=<id>[,<id>...]]...] [-b=<minx,miny,maxx,maxy[,
              srid]> [--bbox-mode=<mode>] [-g=<rows,columns>]]
              [[--count=<count>] [--start-index=<index>]] [-l=<0..4>[,
              <0..4>...] [-l=<0..4>[,<0..4>...]]... [--lod-mode=<mode>]
              [--lod-search-depth=<0..n|all>]] [[--no-appearance] |
              -a=<theme>[,<theme>...] [-a=<theme>[,<theme>...]]...]
              [-s=<select>] [-q=<xml>]] [[-T=<database>] [-H=<host>]
              [-P=<port>] [-d=<name>] [-S=<schema>] [-u=<name>] [-p
              [=<password>]]] [@<filename>...]

Description

The export command exports top-level city objects from the 3D City Database in CityGML or CityJSON format. It corresponds to the export operation offered on the Export tab of the graphical user interface (see Section 4.5). The command provides a range of options to adapt the export process. If you miss settings available for the GUI version of this command, you can use the --config option to pass a config file containing these settings.

General options

-o, --output=<file>

Specify the output file to use for storing the exported city objects. By default, CityGML is used as output format. The output format can be changed to CityJSON by choosing .json or .cityjson as file extension. Moreover, compressed exports using GZIP (.gz, .gzip) or ZIP (.zip) are possible. The supported formats and file extensions are also listed in Table 4.17.

--output-encoding=<encoding>

Define the encoding of the output file using an IANA-based character encoding name. The default value is UTF-8.

--compressed-format=<format>

For compressed exports using GZIP or ZIP, specify the output format to use for encoding the data. Allowed values are citygml and cityjson.

--[no-]fail-fast

Flag to indicate whether to fail fast on errors and to immediately cancel the export process (default: true).

--worker-threads=<threads[,max]>

Number of parallel threads to use for the export process. You can also specify a minimum and maximum number of threads separated by commas. A general description of the multithreaded processing used for the export process is provided in Section 4.5.9.11.

Query and filter options

The export command offers additional options to define both thematic and spatial filters that are used to restrict the export to a subset of the top-level city objects stored in the 3D City Database.

-t, --type-name=<[prefix:]name>[,<[prefix:]name>...]

Comma-separated list of one or more names of the top-level feature types to be exported. The type names are case sensitive and shall match one of the official CityGML feature type names or a feature type name from a CityGML ADE. To avoid ambiguities, you can use an optional prefix for each name. The prefix must be associated with the official XML namespace of the feature type. You can either use the official CityGML namespace prefixes listed in Table 4.11, or use the --namespace option to declare your own prefixes.

--namespace=<prefix=name>[,<prefix=name>...]

Used to specify namespaces and their prefixes as comma-separated list of one or more prefix=name pairs. The prefixes can be used in other options such as --type-name.

-r, --feature-version=<version>

Specify the version of the top-level features to use for the export. Allowed values are latest, at, between, terminated, terminated_at and all. When choosing latest, only those features that have not been terminated in the database are exported, whereas all will export all features. You can also choose to export only features that were valid at a given timestamp using at or for a given time range using between. Likewise, terminated will return all terminated features whereas terminated_at will select features that were terminated at a given timestamp. In all cases, timestamps must be provided using the --feature-version-timestamp option. Further details about the feature version filter are available in Section 4.5.1.

-R, --feature-version-timestamp=<timestamp[,timestamp]>

One or two timestamps to be used with the --feature-version option. A timestamp can be given as a date in the form YYYY-MM-DD or as a date-time in the form YYYY-MM-DDThh:mm:ss[(+|-)hh:mm]. The date-time format supports an optional UTC offset. Use one timestamp with the at and terminated_at values and two timestamps separated by a comma with the between value of the --feature-version option.
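
For example (connection details are placeholders), the following call exports only those feature versions that were valid during the first half of 2021:

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -r between -R 2021-01-01,2021-06-30 \
                -o my_city.gml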

-i, --resource-id=<id>[,<id>...]

Comma-separated list of one or more identifiers. Only top-level features having a matching value for their identifier attribute will be exported.

--db-id=<id>[,<id>...]

Comma-separated list of one or more database IDs. Only top-level features having a matching primary key for the ID column of the CITYOBJECT table will be exported.

-b, --bbox=<minx,miny,maxx,maxy[,srid]>

2D bounding box to use as spatial filter. The bounding box is given by four coordinates that define its lower left and upper right corner. By default, the coordinates are assumed to be in the same CRS that is used by the 3DCityDB instance. Alternatively, you can provide the database srid of the CRS associated with the coordinates as fifth value (e.g. 4326 for WGS84). All values must be separated by commas. The bounding box is evaluated against the ENVELOPE column of the CITYOBJECT table.

--bbox-mode=<mode>

Choose the mode of the bounding box filter. Allowed values are overlaps (default) and within. When set to overlaps, all features overlapping with the bounding box are exported. Otherwise, features must be within the given bounding box. Can only be used together with the --bbox option.

-g, --bbox-tiling=<rows,columns>

Use this option to enable tiling for the export. The bounding box given by the --bbox option is split into a regular grid having the specified number of rows and columns. Each grid cell is exported as separate tile. The values for rows and columns must be separated by a comma. More information about tiled exports is provided in Section 4.5.
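
As a sketch (connection details and coordinates are placeholders), the following call splits the given bounding box into a 2x2 grid and exports each grid cell as a separate tile:

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                -g 2,2 -o my_city.gml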

--count=<count>

Maximum number of top-level features to be exported.

--start-index=<index>

Index within the result set of all top-level features from which the export shall begin. The start index uses zero-based numbering. Thus, the first top-level feature is assigned the index 0, rather than the index 1.

-l, --lod=<0..4>[,<0..4>...]

Comma-separated list of LoDs that shall be exported for the city objects. Each LoD value must be given as integer between 0 and 4. If you provide more than one LoD, use the --lod-mode option to specify how the values shall be evaluated. LoD representations of a city object that are stored in the database but not listed with this option are not exported. See Section 4.5.4 for more details.

--lod-mode=<mode>

Specify the LoD filter mode in case you have provided more than one LoD value for the --lod option. Allowed values are or (default), and, minimum, and maximum. When choosing or, a city object must have a spatial representation in at least one of the given LoDs to be exported, whereas and requires a representation in all LoDs. Both minimum and maximum are special versions of or where only the lowest or highest LoD representation from the matching ones is exported.

--lod-search-depth=<0..n|all>

Number of levels of nested city objects that shall be considered by the LoD filter when searching for a matching LoD representation (see also Section 4.5.4). Either provide a non-negative integer or all as value. The default value is 1.

--no-appearance

Flag to indicate that appearances of the features shall not be exported.

-a, --appearance-theme=<theme>[,<theme>...]

Comma-separated list of appearance themes. Only appearances having a matching theme will be exported. Use none as value to address appearance features lacking a theme attribute.

-s, --sql-select=<select>

Provide an SQL SELECT statement to be used as SQL filter when querying the database. In general, any SELECT statement can be used as long as it returns a list of database IDs of the selected city objects (see Section 4.5.3 for more information). You can also use an @-file to provide the SELECT statement (see Section 4.9.9.3).

-q, --xml-query=<xml>

This option allows you to use a full-fledged XML query expression as filter for the export operation. Make sure the query expression is valid and adheres to the specification for XML query expressions provided in Section 4.5.8. You can also use an @-file to provide the query expression (see Section 4.9.9.3). This option cannot be used with any other filter option of the export command.

Database connection options

The following options allow you to define the connection details that shall be used for establishing a connection to the 3D City Database. You can also use environment variables for this purpose (see Section 4.9.8).

-T, --db-type=<database>

Specify the database system used for running the 3DCityDB. Allowed values are postgresql for PostgreSQL/PostGIS databases (default), and oracle for Oracle Spatial/Locator databases.

-H, --db-host=<host>

Specify the host name of the machine on which the 3DCityDB database server is running.

-P, --db-port=<port>

Specify the TCP port on which the 3DCityDB database server is listening for connections. The default value is 5432 for PostgreSQL and 1521 for Oracle.

-d, --db-name=<name>

Specify the name of the 3DCityDB database to connect to. When connecting to an Oracle database, provide the database SID or service name as value.

-S, --db-schema=<schema>

Name of the database schema to use when connecting to the 3DCityDB. If not provided, the citydb schema is used for PostgreSQL by default, whereas the schema of the user specified by the option --db-username is used under Oracle.

-u, --db-username=<name>

Connect to the 3DCityDB database server as the user given by name.

-p, --db-password[=<password>]

Specify the password to use when connecting to the 3DCityDB database server. You can either provide the password as value for this option or leave the value empty to be prompted to enter the password on the console before connecting to the database. If you skip this option completely, the impexp tool will try to connect to the database without a password. If the database server requires password authentication and a password is not available by other means, the connection attempt will fail in this case.

Examples

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password -o my_city.gml

Export the entire database content as CityGML to the output file my_city.gml. The 3DCityDB to connect to is supposed to be running on a PostgreSQL database on the same machine. The connection will be established to the citydb_v4 database with the user citydb_user and the password my_password.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -t Building -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                -o my_city.json

Only export Building features overlapping with the provided bounding box from the database. The coordinates of the bounding box are given in WGS84. For this reason, the fifth value 4326 of the -b option denotes the SRID that is used by the target database for the WGS84 reference system. The output format is CityJSON.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -i ID_0815,ID_0816 -l 1,2,3 --lod-mode=maximum \
                --compressed-format=citygml -o my_city.zip

Export the top-level city objects with the identifiers ID_0815 and ID_0816 from the database if they have an LoD representation in either LoD 1, 2 or 3. From the matching LoD representations, only export the highest LoD. The output file will be a compressed ZIP archive containing a CityGML file with the exported city objects.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -s "select cityobject_id from cityobject_genericattrib \
                    where attrname='energy_level' and realval < 12" \
                -o my_city.zip

Export all city objects satisfying the given SQL SELECT statement.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                @/path/to/xml-query -o my_city.gml

Export all city objects satisfying the given XML query expression. To avoid possible system limitations on the length of the command line, the XML query is stored in a separate argument file called xml-query which is referenced from the command line using the @-file notation. The content of the xml-query file is shown below:

-q
'<query xmlns="http://www.3dcitydb.org/importer-exporter/config"> \
  <typeNames> \
    <typeName xmlns:bldg="http://www.opengis.net/citygml/building/2.0">bldg:Building</typeName> \
  </typeNames> \
  <filter> \
    <bbox> \
      <envelope srid="4326"> \
        <lowerCorner>13.3508824 52.4799281</lowerCorner> \
        <upperCorner>13.3578297 52.4862805</upperCorner> \
      </envelope> \
    </bbox> \
  </filter> \
  <sortBy> \
    <sortProperty> \
      <valueReference>bldg:measuredHeight</valueReference> \
    </sortProperty> \
  </sortBy> \
  <limit> \
    <count>20</count> \
  </limit> \
</query>'

According to this query expression, only the first 20 buildings satisfying the provided bounding box filter and sorted by their bldg:measuredHeight attribute will be exported.

Note

For the above @-file xml-query to work, the following requirements must be met:

  • The entire XML query expression is the value of the -q option and, thus, must be put on a single line in the argument file. Either avoid line breaks in your XML or escape them using a backslash \ character like in the above example.
  • Since the XML query expression contains whitespace, it must be put in double or single quotes. When using double quotes, all double quotes of the query expression itself must be escaped. To avoid escaping, the above example uses single quotes.

Visualization export command

Synopsis

impexp export-vis [-hjVz] [--ade-extensions=<folder>] [-c=<file>]
                  [--log-file=<file>] [--log-level=<level>] -o=<file>
                  [--pid-file=<file>] [--plugins=<folder>]
                  [--use-plugin=<plugin[=true|false]>[,<plugin
                  [=true|false]>...]]... [--worker-threads=<threads[,max]
                  >] [-D=<form[=pixels]>[,<form[=pixels]>...] [-D=<form
                  [=pixels]>[,<form[=pixels]>...]]... -l=<0..4 | halod>
                  [-a=<theme>]] [[[-t=<[prefix:]name>[,<[prefix:]
                  name>...]]... [--namespace=<prefix=name>[,
                  <prefix=name>...]]...] [[-r=<version>] [-R=<timestamp[,
                  timestamp]>]] [-i=<id>[,<id>...] [-i=<id>[,
                  <id>...]]...] [-b=<minx,miny,maxx,maxy[,srid]>]
                  [-g=<rows,columns | auto[=length]>] [-s=<select>]]
                  [[-B] [--no-surface-normals] [-C] [-f=<0..1>]
                  [-x=<mode>] [--no-pot-atlases]] [-G
                  [--gltf-version=<version>] [--gltf-converter=<file>]
                  [--gltf-embed-textures] [--gltf-binary]
                  [--gltf-draco-compression] [-m]] [[-A=<mode>]
                  [-O=<number|globe|generic>]
                  [--google-elevation-api=<api-key>]
                  [--transform-height]] [[-T=<database>] [-H=<host>]
                  [-P=<port>] [-d=<name>] [-S=<schema>] [-u=<name>] [-p
                  [=<password>]]] [@<filename>...]

Description

The export-vis command exports top-level city objects from the 3D City Database in KML/COLLADA/glTF format for visualization. It corresponds to the visualization export operation offered on the VIS Export tab of the graphical user interface (see Section 4.6). The command provides a range of options to adapt the export process. If you miss settings available for the GUI version of this command, you can use the --config option to pass a config file containing these settings.

General options

-o, --output=<file>

Specify the path where the main KML output file should be saved.

-z, --kmz

Flag to indicate that KML/COLLADA output should be packaged into a KMZ archive per tile and display form (see Section 4.6.7.1 for more details). This option cannot be used with glTF exports.

-j, --json-metadata

Flag to indicate that metadata about the exported top-level features should be recorded in a separate JSON file. Further details are provided in Section 4.6.7.1.

--worker-threads=<threads[,max]>

Number of parallel threads to use for the visualization export process. You can also specify a minimum and maximum number of threads separated by commas.

Display options

-D, --display-form=<form[=pixels]>[,<form[=pixels]>...]

This option controls the display forms to be exported. The display forms are given as comma-separated list of one or more form[=pixels] pairs. Allowed values for the display form are collada, geometry, extruded, and footprint. For each display form, its visibility in terms of screen pixels can be optionally defined. If omitted, the display form will always be visible in the viewer. More information about display forms and their visibility can be found in Section 4.6.

-l, --lod=<0..4 | halod>

Define the LoD representation of the city objects that shall be used for creating the visualization models. The value can either be given as an integer between 0 and 4 to export a specific LoD, or set to halod, in which case the highest available LoD representation is chosen for each city object.

-a, --appearance-theme=<theme>

Appearance theme to use for texturing and coloring the exported city objects. Use none as value to address appearance features in the database lacking a theme attribute. This option is only considered for COLLADA/glTF exports.

Query and filter options

The export-vis command offers additional options to define both thematic and spatial filters that are used to restrict the visualization export to a subset of the top-level city objects stored in the 3D City Database.

-t, --type-name=<[prefix:]name>[,<[prefix:]name>...]

Comma-separated list of one or more names of the top-level feature types to be exported. The type names are case sensitive and shall match one of the official CityGML feature type names or a feature type name from a CityGML ADE. To avoid ambiguities, you can use an optional prefix for each name. The prefix must be associated with the official XML namespace of the feature type. You can either use the official CityGML namespace prefixes listed in Table 4.11, or use the --namespace option to declare your own prefixes.

--namespace=<prefix=name>[,<prefix=name>...]

Used to specify namespaces and their prefixes as comma-separated list of one or more prefix=name pairs. The prefixes can be used in other options such as --type-name.

-r, --feature-version=<version>

Specify the version of the top-level features to use for the visualization export. Allowed values are latest, at, between, terminated, terminated_at and all. When choosing latest, only those features that have not been terminated in the database are exported, whereas all will export all features. You can also choose to export only features that were valid at a given timestamp using at or for a given time range using between. Likewise, terminated will return all terminated features whereas terminated_at will select features that were terminated at a given timestamp. In all cases, timestamps must be provided using the --feature-version-timestamp option. Further details about the feature version filter are available in Section 4.6.1.

-R, --feature-version-timestamp=<timestamp[,timestamp]>

One or two timestamps to be used with the --feature-version option. A timestamp can be given as a date in the form YYYY-MM-DD or as a date-time in the form YYYY-MM-DDThh:mm:ss[(+|-)hh:mm]. The date-time format supports an optional UTC offset. Use one timestamp with the at and terminated_at values and two timestamps separated by a comma with the between value of the --feature-version option.

-i, --resource-id=<id>[,<id>...]

Comma-separated list of one or more identifiers. Only top-level features having a matching value for their identifier attribute will be exported.

-b, --bbox=<minx,miny,maxx,maxy[,srid]>

2D bounding box to use as spatial filter. The bounding box is given by four coordinates that define its lower left and upper right corner. By default, the coordinates are assumed to be in the same CRS that is used by the 3DCityDB instance. Alternatively, you can provide the database srid of the CRS associated with the coordinates as fifth value (e.g. 4326 for WGS84). All values must be separated by commas. The bounding box is evaluated against the ENVELOPE column of the CITYOBJECT table.

-g, --tiling=<rows,columns | auto[=length]>

Use this option to enable tiling for the visualization export. The bounding box of the entire export area is split into a regular grid, and each grid cell is exported as separate tile. You can either define the number of rows and columns separated by a comma for the grid, or use auto[=length] as value. In the latter case, the bounding box will be automatically split into tiles having a size of the given length value. If length is omitted, a default side length of 125m is used. The bounding box used for tiling is either specified through the --bbox option or calculated automatically in case --bbox is not provided. More information about tiled exports is provided in Section 4.6.

-s, --sql-select=<select>

Provide an SQL SELECT statement to be used as SQL filter when querying the database. In general, any SELECT statement can be used as long as it returns a list of database IDs of the selected city objects (see Section 4.6.4 for more information). You can also use an @-file to provide the SELECT statement (see Section 4.9.9.3).

COLLADA/glTF rendering options

The following options are specific to COLLADA/glTF exports. They are ignored if the list of display forms given by the --display-form option does not contain the value collada.

-B, --double-sided

Flag to disable backface culling and rather force a viewer to render both sides of each polygon. Be aware that this might decrease the visualization performance.

--no-surface-normals

If this flag is set, surface normals of polygons are not stored in the output. When exporting textured models, surface normals often do not increase the visual quality and can be omitted.

-C, --crop-textures

Use this flag to let the export operation cut each texture image to the minimum size required for texturing the corresponding polygon. This option can help to avoid loading massive texture data into a viewer.

-f, --texture-scale-factor=<0..1>

Scale down texture images by the given factor. A value between 0 and 1 must be provided. The default value of 1 means no scaling.

-x, --texture-atlas=<mode>

Specify if and how texture atlases should be created for each top-level feature from the exported texture images. Allowed values are none, basic, tpim, and tpim_wo_rotation. By default, texture atlases are created for all visualization exports using the basic mode. Use none to disable the creation of texture atlases. The further values correspond to the names of the supported algorithms for creating texture atlases as described in Section 4.6.7.1.

--no-pot-atlases

By default, texture atlases are created with power-of-two (PoT) side lengths. Use this flag to disable the default behaviour.

glTF export options

The export-vis command supports exporting glTF models by converting the COLLADA output to glTF on the fly. This is achieved by using the open source COLLADA2glTF converter tool. The following options control the glTF export.

-G, --gltf

Set this flag on your command line to enable the glTF export. This requires that collada has been selected as display form with the --display-form option.

--gltf-version=<version>

Specify the glTF version to use for the export. Allowed values are 1.0 and 2.0 (default).

--gltf-converter=<file>

Provide the path to the executable of the COLLADA2glTF converter tool. By default, the COLLADA2glTF tool shipped with the Importer/Exporter is used in the conversion. Be careful when using a different version of the tool as this might result in unexpected behaviour. More information is provided in Section 4.6.7.1.

--gltf-embed-textures

By default, texture images are exported as separate files relative to the location of the glTF output. With this flag, texture images are rather embedded in the glTF files by encoding the texture data using a base 64 encoding.

--gltf-binary

Flag to indicate that the glTF output should be converted and compressed to binary glTF format.

--gltf-draco-compression

Flag to indicate that geometry data should be compressed using the Google Draco compression technology. Draco compression is only available for glTF version 2.0, so make sure the --gltf-version option is correctly set.

-m, --remove-collada

Use this flag to remove the COLLADA files after the conversion and only keep the glTF output as result.

Database connection options

The following options allow you to define the connection details that shall be used for establishing a connection to the 3D City Database. You can also use environment variables for this purpose (see Section 4.9.8).

-T, --db-type=<database>

Specify the database system used for running the 3DCityDB. Allowed values are postgresql for PostgreSQL/PostGIS databases (default), and oracle for Oracle Spatial/Locator databases.

-H, --db-host=<host>

Specify the host name of the machine on which the 3DCityDB database server is running.

-P, --db-port=<port>

Specify the TCP port on which the 3DCityDB database server is listening for connections. The default value is 5432 for PostgreSQL and 1521 for Oracle.

-d, --db-name=<name>

Specify the name of the 3DCityDB database to connect to. When connecting to an Oracle database, provide the database SID or service name as value.

-S, --db-schema=<schema>

Name of the database schema to use when connecting to the 3DCityDB. If not provided, the citydb schema is used for PostgreSQL by default, whereas the schema of the user specified by the option --db-username is used under Oracle.

-u, --db-username=<name>

Connect to the 3DCityDB database server as the user given by name.

-p, --db-password[=<password>]

Specify the password to use when connecting to the 3DCityDB database server. You can either provide the password as value for this option or leave the value empty to be prompted to enter the password on the console before connecting to the database. If you skip this option completely, the impexp tool will try to connect to the database without a password. If the database server requires password authentication and a password is not available by other means, the connection attempt will fail in this case.

Examples

$ impexp export-vis -T oracle -H localhost -d citydb_v4 -u citydb_user -p my_password \
                    -D collada -l 2 -a visual -o my_vis.kml

Export all city objects from the database as COLLADA in LoD2. The appearance theme visual is used for texturing/coloring the objects. The output is stored in the main KML file my_vis.kml that can be loaded with a viewer. The 3DCityDB to connect to is supposed to be running on an Oracle database on the same machine. The connection will be established to the citydb_v4 database with the user citydb_user and the password my_password.

$ impexp export-vis -H localhost -d citydb_v4 -u citydb_user -p my_password \
                    -D footprint=50,extruded=125,collada=200 -l halod \
                    -t Building -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                    -g 3,4 -o my_vis.kml

Export all Building features inside the given bounding box as footprint, extruded and collada models using their highest available LoD representation. Tiling is enabled for this export, and 3x4 tiles are created per display form. Each display form is assigned a different visibility value. A footprint tile will become visible in the viewer as soon as it occupies 50 square pixels of screen space. When zooming in, the viewer will switch to extruded and finally to collada as soon as their visibility value is reached.

$ impexp export-vis -H localhost -d citydb_v4 -u citydb_user -p my_password \
                    -D geometry -l 2 \
                    -t CityFurniture,Bridge -g auto \
                    -z -j -o my_vis.kml

Export all CityFurniture and Bridge objects as geometry from LoD2. Again, tiling is applied to the export. Since a --bbox option is not provided, the bounding box is automatically calculated from all features. This bounding box is then automatically tiled using a default side length of 125m per tile. Moreover, each tile file is compressed as ZIP archive and an additional JSON file containing metadata about all exported features is created.

$ impexp export-vis -H localhost -d citydb_v4 -u citydb_user -p my_password \
                    -D collada -l halod \
                    -i ID_0815,ID_0816 --double-sided \
                    -a visual -x tpim_wo_rotation -C \
                    -o my_vis.kml

Export the top-level city objects with the identifiers ID_0815 and ID_0816 as collada using their highest available LoD representation. Textures shall be exported from the appearance theme visual. The texture images are first cropped and then packaged into texture atlases using the tpim_wo_rotation algorithm. Moreover, backface culling is disabled for this export.

$ impexp export-vis -H localhost -d citydb_v4 -u citydb_user -p my_password \
                    -D collada -l 3 \
                    -s "select cityobject_id from cityobject_genericattrib \
                    where attrname='energy_level' and realval < 12" \
                    -G --gltf-binary --gltf-draco-compression -m \
                    -o my_vis.kml

Export all city objects satisfying the given SQL SELECT statement as collada based on their LoD3 geometries. The models are converted to binary glTF format and Draco compression is applied to the geometries. The intermediate COLLADA output is removed for the final result.

Delete command

Synopsis

impexp delete [-hvV] [--ade-extensions=<folder>] [-c=<file>]
              [--delete-log=<file>] [--log-file=<file>]
              [--log-level=<level>] [-m=<mode>] [--pid-file=<file>]
              [--plugins=<folder>] [--use-plugin=<plugin[=true|false]>[,
              <plugin[=true|false]>...]]... [[-g]]
              [[--lineage=<lineage>] [--updating-person=<name>]
              [--reason-for-update=<reason>]] [[[-t=<[prefix:]name>[,<
              [prefix:]name>...]]... [--namespace=<prefix=name>[,
              <prefix=name>...]]...] [[-r=<version>] [-R=<timestamp[,
              timestamp]>]] [-i=<id>[,<id>...] [-i=<id>[,<id>...]]...]
              [--db-id=<id>[,<id>...] [--db-id=<id>[,<id>...]]...]
              [-b=<minx,miny,maxx,maxy[,srid]> [--bbox-mode=<mode>]]
              [[--count=<count>] [--start-index=<index>]] [-s=<select>]
              [-q=<xml>]] [-f=<file> [-w] [[-C=<type>] [[-n=<name>]
              [-I=<index>] [--[no-]header] [-D=<string>] [-Q=<char>]
              [--quote-escape=<char>] [-M=<char>]
              [--csv-encoding=<encoding>]]]] [[-T=<database>]
              [-H=<host>] [-P=<port>] [-d=<name>] [-S=<schema>]
              [-u=<name>] [-p[=<password>]]] [@<filename>...]

Description

The delete command deletes top-level city objects from the 3D City Database. The city objects can either be physically deleted or just terminated. In the latter case, the objects remain in the database and their terminationDate attribute is set to the time of the delete operation. This way, terminated objects can be easily identified and filtered in, for instance, export operations using a feature version filter.

The delete command supports both thematic and spatial filters to specify the features to be deleted. In addition, you can also use a delete list. Delete lists are CSV files that contain a list of identifiers (and possibly further data). Each city object in the database whose identifier matches an entry in the delete list will be deleted. For example, the import log file created when importing CityGML/CityJSON files into the database (see Section 4.4.6.11) can be used as delete list. This comes in very handy when you want to “rollback” an import process, for instance, because it aborted due to errors.

The top-level city objects affected by the delete operation can optionally be recorded in a separate delete log. The delete log is a comma-separated value (CSV) file that contains the type name, the database ID and the object identifier of each city object as separate records.

A corresponding delete operation is not offered by the graphical user interface.
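
For a first impression (connection details are placeholders), the following call terminates all Building features instead of physically deleting them and records the affected objects in a delete log; the individual options are described below:

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -m terminate -t Building \
                --delete-log=terminated-features.csv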

General options

-m, --delete-mode=<mode>

Specify the delete mode. Allowed values are delete and terminate. Be careful to make the right choice here. When choosing delete, the city objects will be permanently removed from the database. This operation cannot be undone. Terminated objects remain in the database. However, if you only terminate but never delete, your database will grow over time. The default mode is delete.

Note

City objects that are already terminated in the database will not be terminated again when running in terminate mode. Thus, their terminationDate will not be updated. However, terminated city objects can, of course, be deleted from the database.

-v, --preview

This flag allows you to run the delete command in preview mode. The delete operation will print the number and types of features that would be affected as a summary overview to the console.

--delete-log=<file>

If you want a delete log to be created for all top-level features deleted from the database, provide the path to the delete log file with this option.

-g, --cleanup-global-appearances

Flag to indicate that global appearances should be cleaned up after having deleted the city objects. Global appearances are not automatically deleted because they can reference more than one city object. If this option is set, the delete operation will search the database for global appearances that do not reference any city object anymore. Only these appearances will be removed from the database.

Metadata options

The delete command allows for specifying metadata that is assigned to every city object when terminate is chosen as delete mode. The values are stored in columns of the table CITYOBJECT.

--lineage=<lineage>

Value to store as lineage of the city objects.

--updating-person=<name>

Name of the user responsible for the delete. By default, the name of the database user is chosen.

--reason-for-update=<reason>

Reason for deleting the data.

Query and filter options

The delete command offers additional options to define both thematic and spatial filters that are used to more precisely specify the top-level city objects to be deleted from the 3D City Database.

-t, --type-name=<[prefix:]name>[,<[prefix:]name>...]

Comma-separated list of one or more names of the top-level feature types to be deleted. The type names are case sensitive and shall match one of the official CityGML feature type names or a feature type name from a CityGML ADE. To avoid ambiguities, you can use an optional prefix for each name. The prefix must be associated with the official XML namespace of the feature type. You can either use the official CityGML namespace prefixes listed in Table 4.11, or use the --namespace option to declare your own prefixes.

--namespace=<prefix=name>[,<prefix=name>...]

Used to specify namespaces and their prefixes as comma-separated list of one or more prefix=name pairs. The prefixes can be used in other options such as --type-name.

-r, --feature-version=<version>

Specify the version of the top-level features to use for the delete operation. Allowed values are latest, at, between, terminated, terminated_at and all. When choosing latest, only those features that have not been terminated in the database are deleted, whereas all will delete all features. You can also choose to delete only features that were valid at a given timestamp using at or for a given time range using between. Likewise, terminated will delete all terminated features whereas terminated_at will select features that were terminated at a given timestamp. In all cases, timestamps must be provided using the --feature-version-timestamp option.

-R, --feature-version-timestamp=<timestamp[,timestamp]>

One or two timestamps to be used with the --feature-version option. A timestamp can be given as a date in the form YYYY-MM-DD or as a date-time in the form YYYY-MM-DDThh:mm:ss[(+|-)hh:mm], where the UTC offset is optional. Use one timestamp with the at and terminated_at values and two timestamps separated by a comma with the between value of the --feature-version option.
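
As an illustration (connection details are placeholders), the following call terminates the top-level features that were valid during the given time range (here: the year 2018) by combining the between value with two timestamps:

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -m terminate \
                -r between -R 2018-01-01,2019-01-01T00:00:00+01:00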

-i, --resource-id=<id>[,<id>...]

Comma-separated list of one or more identifiers. Only top-level features having a matching value for their identifier attribute will be deleted.

--db-id=<id>[,<id>...]

Comma-separated list of one or more database IDs. Only top-level features having a matching primary key for the ID column of the CITYOBJECT table will be deleted.

-b, --bbox=<minx,miny,maxx,maxy[,srid]>

2D bounding box to use as spatial filter. The bounding box is given by four coordinates that define its lower left and upper right corner. By default, the coordinates are assumed to be in the same CRS that is used by the 3DCityDB instance. Alternatively, you can provide the database srid of the CRS associated with the coordinates as fifth value (e.g. 4326 for WGS84). All values must be separated by commas. The bounding box is evaluated against the ENVELOPE column of the CITYOBJECT table.

--bbox-mode=<mode>

Choose the mode of the bounding box filter. Allowed values are overlaps (default) and within. When set to overlaps, all features overlapping with the bounding box are deleted. Otherwise, features must be within the given bounding box. Can only be used together with the --bbox option.

--count=<count>

Maximum number of top-level features to be deleted.

--start-index=<index>

Index within the result set of all top-level features from which the delete operation shall begin. The start index uses zero-based numbering. Thus, the first top-level feature is assigned the index 0, rather than the index 1.

-s, --sql-select=<select>

Provide an SQL SELECT statement to be used as SQL filter when querying the database. In general, any SELECT statement can be used as long as it returns a list of database IDs of the selected city objects (see Section 4.5.3 for more information). You can also use an @-file to provide the SELECT statement (see Section 4.9.9.3).

-q, --xml-query=<xml>

This option allows you to use a full-fledged XML query expression as filter for the delete operation. Make sure the query expression is valid and adheres to the specification for XML query expressions provided in Section 4.5.8. You can also use an @-file to provide the query expression (see Section 4.9.9.3). This option cannot be used with any other filter option of the delete command.

Delete list options

You can also pass a delete list to the delete command to control which city objects should be deleted. A delete list can be used in addition to or instead of the above filter options.

Delete lists are simple comma-separated values (CSV) files that provide the identifiers of the city objects to be deleted. Each identifier must be put on a separate line (row) of the file, and each line may contain additional values (columns) separated by a delimiter (typically a single reserved character such as comma, semicolon, tab, etc.). The first record may be reserved as header containing a list of column names. Usually, every row has the same sequence of columns. If a line starts with a predefined comment marker (typically a single reserved character such as #), the entire row is ignored and skipped.

Due to their simple structure, delete lists can be easily created with external tools and processes. The following snippet shows an example of a simple delete list that can be used with the delete command. It just provides an identifier per row. The first line is used as header.

GMLID
ID_0815
ID_0816
ID_0817
ID_0818
...

The following options let you define the layout and reserved characters of the delete list you want to use with the delete command.

-f, --delete-list=<file>

Specify the path to the delete list file to use in the delete operation.

-w, --delete-list-preview

Use this option to print a preview of the first few lines of the delete list to the console, parsed and interpreted according to the provided options. This preview is very helpful for checking and adapting the delimiter character(s), quoting rules, header information, identifier column name or index, etc.

-n, --id-column-name=<name>

Name of the column that holds the identifier value. Using this option only makes sense if the delete list contains a header line. Otherwise, use the --id-column-index option to specify the column index.

-I, --id-column-index=<index>

Index of the column that holds the identifier value. The first column of a row has the index 1. If this option is omitted, the value of the first column of each row will be used as identifier by default. This option is mutually exclusive with the --id-column-name option (only specify one).

-C, --id-column-type=<type>

Specify the type of the identifier. Allowed values are resource and db. When choosing resource, the identifiers provided in the delete list are interpreted as object identifiers (e.g., gml:id in CityGML) and are therefore matched against the GMLID column of the CITYOBJECT table. In contrast, the identifiers are taken as database IDs and checked against the ID column of the CITYOBJECT table when selecting db instead. The default value is resource.

--[no-]header

Define whether or not the delete list uses a header line. By default, the delete operation assumes that the first line contains header information.

-D, --delimiter=<string>

Specify the delimiter used for separating values in the delete list. By default, a comma , is assumed as delimiter. The provided delimiter may consist of more than one character.

-Q, --quote=<char>

The values in the delete list may be quoted (i.e., enclosed by a reserved character). This option lets you define the character used as quote (default: "). Only single characters are allowed as value.

--quote-escape=<char>

If the delete list contains quoted values, define the character used for escaping embedded quote characters. The default escape character is ". Only single characters are allowed as value.

-M, --comment-marker=<char>

Specify the character used as comment marker in the delete list. Any line starting with this comment marker is interpreted as comment and, thus, will be ignored. The default comment marker is #. Only single characters are allowed as value.

--csv-encoding=<encoding>

Define the encoding of the delete list using an IANA-based character encoding name. UTF-8 is assumed as default encoding.
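
As an illustration, consider a hypothetical delete list my-list.csv that uses a semicolon as delimiter, has no header line, and stores the resource identifiers in its second column:

# demolished buildings, updated 2021-04-19
4711;ID_0815;demolished
4712;ID_0816;demolished

Such a file could be processed as follows (database connection options omitted for brevity):

$ impexp delete -f my-list.csv -D ";" --no-header -I 2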

Database connection options

The following options allow you to define the connection details that shall be used for establishing a connection to the 3D City Database. You can also use environment variables for this purpose (see Section 4.9.8).

-T, --db-type=<database>

Specify the database system used for running the 3DCityDB. Allowed values are postgresql for PostgreSQL/PostGIS databases (default), and oracle for Oracle Spatial/Locator databases.

-H, --db-host=<host>

Specify the host name of the machine on which the 3DCityDB database server is running.

-P, --db-port=<port>

Specify the TCP port on which the 3DCityDB database server is listening for connections. The default value is 5432 for PostgreSQL and 1521 for Oracle.

-d, --db-name=<name>

Specify the name of the 3DCityDB database to connect to. When connecting to an Oracle database, provide the database SID or service name as value.

-S, --db-schema=<schema>

Name of the database schema to use when connecting to the 3DCityDB. If not provided, the citydb schema is used for PostgreSQL by default, whereas the schema of the user specified by the option --db-username is used under Oracle.

-u, --db-username=<name>

Connect to the 3DCityDB database server as the user given by name.

-p, --db-password[=<password>]

Specify the password to use when connecting to the 3DCityDB database server. You can either provide the password as value for this option or leave the value empty to be prompted to enter the password on the console before connecting to the database. If you skip this option completely, the impexp tool will try to connect to the database without a password. If the database server requires password authentication and a password is not available by other means, the connection attempt will fail in this case.

Examples

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password

Delete all city objects from the database. The 3DCityDB to connect to is supposed to be running on a PostgreSQL database on the same machine. The connection will be established to the citydb_v4 database with the user citydb_user and the password my_password.

Caution

Be very careful with the above example and only use it in case you are absolutely sure. There will be no confirmation prompt and no undo operation (which is also true for all further examples…). If you really want to clean the entire database, the cleanup_schema database function provided by the 3DCityDB should be your preferred choice simply because it is much faster.

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -m terminate -v \
                -t Building -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                --bbox-mode=within

Only terminate Building features within the given bounding box. The affected buildings are kept in the database but marked as terminated by setting their terminationDate attribute. The entire process is run in preview mode. Thus, a summary overview of the number of affected features is printed to the console but nothing will be committed to the database.

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -m terminate -g \
                -s "select cityobject_id from cityobject_genericattrib \
                    where attrname='energy_level' and realval < 12" \
                --count 20

Terminate the first 20 top-level city objects satisfying the given SQL SELECT statement. Afterwards, global appearances not referencing a city object anymore will be deleted from the database.

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -f imported-features.log -I 2 -C db

Only delete those top-level city objects having a database ID that is contained in the provided delete list imported-features.log. The command uses default values for parsing and interpreting the delete list except for the index of the column that holds the identifier values, which is set to 2, and the type of the identifier, which is set to db.

The above example has been chosen deliberately because it illustrates how to use an import log file created by a CityGML/CityJSON import operation as delete list for the delete command. The general layout and content of an import log file is discussed in Section 4.4.6.11. The snippet below shows an example.

 1   #3D City Database Importer/Exporter, version "4.3.0"
 2   #Database connection: citydb_user@localhost:5432/devel
 3   #Timestamp: 2021-04-19 21:30:40
 4   FEATURE_TYPE,CITYOBJECT_ID,GMLID_IN_FILE,INPUT_FILE
 5   Building,532,BLDG_0003000000106562,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
 6   Building,540,BLDG_00030000001065f4,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
 7   Building,552,BLDG_0003000000106543,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
 8   Building,556,BLDG_0003000a000afa94,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
 9   Building,563,BLDG_0003000a0019a1c6,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
10   Building,533,BLDG_0003000000106686,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
11   Building,542,BLDG_0003000a000afacf,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
12   Building,548,BLDG_0003000a000afb3b,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
13   Building,560,BLDG_0003000e009b5355,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
14   Building,568,BLDG_0003000b003d0d1b,C:\data\citygml\berlin\gasometer\tile_0_0\Gasometer.gml
15   #Import successfully finished.

The import log uses a comma , as delimiter. Lines 1-3 and line 15 are comments starting with the comment marker # and should therefore be ignored. Line 4 is used as header to provide column names for the subsequent rows. The actual content is therefore provided in lines 5-14. Each row consists of four columns, and the second column CITYOBJECT_ID contains the database ID of the imported top-level features.

As you can see from the above example, the default CSV options of the delete command are already suited to correctly identify the comments, the header, and the separate columns of the import log file. As mentioned above, we only have to specify that we want to use database IDs as identifiers and provide the correct column index on the command line.

Note

The third column GMLID_IN_FILE of the import log file holds the object identifier of the imported city objects. To use this object identifier in the delete process, you just have to use -I 3 to specify the third column and omit the additional -C db option. However, be careful when using the GMLID_IN_FILE values because the object identifiers in the database might be different due to import preference settings.
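
Building on the example above, the corresponding command would then read:

$ impexp delete -H localhost -d citydb_v4 -u citydb_user -p my_password \
                -f imported-features.log -I 3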

Validate command

Synopsis

impexp validate [-hV] [--ade-extensions=<folder>] [-c=<file>]
                [--log-file=<file>] [--log-level=<level>]
                [--pid-file=<file>] [--plugins=<folder>]
                [--use-plugin=<plugin[=true|false]>[,<plugin[=true|false]
                >...]]... [@<filename>...] <file>...

Description

The validate command validates one or more CityGML or CityJSON files against the official CityGML XML and CityJSON schemas. It corresponds to the validate functionality of the import operation offered on the Import tab of the graphical user interface (see Section 4.4). The validation does not require internet access since the schemas are packaged with the application. Validation errors are reported on the console.

Note

CityGML ADE schemas are automatically considered in the validation process if the ADE has been correctly registered with the Importer/Exporter (see Section 5.3 for more details). This way, ADE data can be validated as well. CityJSON Extension schemas are, however, not supported by the validation process. Please use an external tool like cjio to validate such datasets.

Note

The impexp tool terminates with exit code 1 in case the validation fails.

Options

<file>...

One or more input files or directories to be validated. This parameter is mandatory. Glob patterns are supported to specify a set of filenames using wildcard characters (e.g., /path/to/*.gml). If a directory is provided, it will be recursively scanned for CityGML/CityJSON files. The supported input file formats and extensions are identical with those listed in Table 4.7.

Examples

$ impexp validate /path/to/my_city.gml /path/to/my_city.json

Validate the CityGML file my_city.gml and the CityJSON file my_city.json against the respective schema definition.
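
Directories and glob patterns can be combined in a single call (the paths below are placeholders). Quoting the glob pattern prevents the shell from expanding it, so the pattern is resolved by the impexp tool itself:

$ impexp validate /path/to/citygml/tiles/ "/path/to/cityjson/*.json"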

GUI command

Synopsis

impexp gui [-hV] [--no-splash] [--ade-extensions=<folder>] [-c=<file>]
           [--log-file=<file>] [--log-level=<level>] [--pid-file=<file>]
           [--plugins=<folder>] [--use-plugin=<plugin[=true|false]>[,
           <plugin[=true|false]>...]]... [@<filename>...]

Description

The only purpose of the gui command is to launch the graphical user interface of the Importer/Exporter. In contrast to the start script 3DCityDB-Importer-Exporter provided in the installation directory, you can use the global and command-specific options in the launch process, for instance, to load a specific config file.

Options

--no-splash

Hide the Importer/Exporter splash screen during startup.

Examples

$ impexp gui -c /path/to/my_config.xml \
             --no-splash --log-level=debug \
             --plugins=/my/plugins/folder/

Launch the GUI of the Importer/Exporter and do not show the splash screen during startup. The settings are loaded from the config file my_config.xml instead of the default config file. Moreover, the log level debug is used and plugins are searched for in the folder /my/plugins/folder/ rather than in the plugins folder within the installation directory.

Environment variables

In addition to the command line options for defining the database connection details, the CLI also supports the following environment variables for this purpose.

CITYDB_TYPE=<postgresql|oracle>

The type of the 3DCityDB to connect to (default: postgresql).

CITYDB_HOST=<hostname or ip>

Name of the host or IP address on which the 3DCityDB is running.

CITYDB_PORT=<port>

Port of the 3DCityDB to connect to (default: 5432 for PostgreSQL, 1521 for Oracle).

CITYDB_NAME=<name>

Name of the 3DCityDB database to connect to.

CITYDB_SCHEMA=<schema>

Schema to use when connecting to the 3DCityDB (default: citydb for PostgreSQL, username for Oracle).

CITYDB_USERNAME=<username>

Username to use when connecting to the 3DCityDB.

CITYDB_PASSWORD=<password>

Password to use when connecting to the 3DCityDB.

The environment variables can be used instead of or together with the command line options. For example, instead of providing the password for connecting to the database as clear text on the command line using the --db-password option, you could store it in the CITYDB_PASSWORD variable. The following snippet illustrates this example for a UNIX/Linux system, where the export command is used for setting environment variables. Use the set command under Windows instead.

$ export CITYDB_PASSWORD=my_password
$ impexp export -H localhost -d citydb_v4 -u citydb_user -o my_city.gml

Note

The command line options always take precedence. For example, if you additionally provide the password using the --db-password option in the above example, the value of the CITYDB_PASSWORD variable will be ignored. The environment variables are always ignored when running the Importer/Exporter in GUI mode.

Hint

The environment variables can also be used to configure the database connection when running the Importer/Exporter as Docker container or when using the Web Feature Service (dockerized or not).

General

The following sections cover more general functionalities and usage information for the impexp command-line tool.

Abbreviated options

The impexp tool can recognize abbreviated option names. This way you can avoid typing long options such as --gltf-draco-compression from the export-vis command. To abbreviate an option, simply specify the initial letter(s) of the first component of its name and optionally of one or more subsequent components. The name components are separated by dash characters (-) or by case if the option name uses camelCase.

For example, the option --gltf-draco-compression has three components. Valid abbreviations for this option name are listed below.

Examples of valid option name abbreviations
--glt, --gDC, --g-d-c, --g-dra, --g-com, --gCom, …

However, not all of the valid abbreviations shown in Table 4.28 will actually work when provided on the command line. The reason is that abbreviations must be unambiguous and may not match multiple options.

For example, the export-vis command also offers the option --gltf-version. The abbreviation --glt would match both --gltf-draco-compression and --gltf-version and, thus, is not allowed. The impexp tool aborts with an error message in case of ambiguous abbreviations.
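
For illustration, a sketch of an export-vis invocation using an unambiguous abbreviation (assuming no other export-vis option starts with the same components; all other options are omitted):

$ impexp export-vis ... --gltf-draco ...

Here, --gltf-draco is expanded to --gltf-draco-compression.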

Double dash

To explicitly separate command-line options from the list of positional parameters, you can use two double dashes -- without any characters attached as delimiter. For example, consider the following command to import the CityGML file my_city.gml into the 3D City Database.

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p my_city.gml

The -p option of the import command specifies the password to use when connecting to the 3DCityDB. The option can either take a value or be empty. In the latter case, the user is prompted for the password. In the above example, the intention of the user was to leave the password option empty. However, the impexp tool will use the filename my_city.gml as value for -p instead. The reason is that the filename is a positional parameter and cannot be distinguished from the option value of -p in this example.

To solve this issue, simply separate the filename parameter from the options using two dashes.

$ impexp import -H localhost -d citydb_v4 -u citydb_user -p -- my_city.gml

Reordering the options is also a valid solution for this example.

$ impexp import -H localhost -d citydb_v4 -p -u citydb_user my_city.gml

Using @-files

When creating a command line with lots of options or with long arguments for options, you might run into system limitations on the length of the command line. Argument files (@-files) are a way to overcome this problem. Argument files are files that themselves contain arguments to the command. You can provide one or more argument files on the command line by simply prefixing their filenames with the character @. The content of each @-file is automatically expanded into the argument list.

The options within an @-file can be space-separated or newline-separated. If the value of an option contains embedded whitespace, put the whole value in double " or single ' quotes. Within quoted values, embedded quotes must be escaped with a backslash \ and backslashes themselves need to be escaped with another backslash if they are part of the value. You can also split long option values on multiple lines by escaping each line break with a backslash. Multi-line options are typically only required to increase the readability of the @-file. Lines starting with # are comments and are ignored.

The following example shows a simple @-file containing options to be used with the export command.

1   # This line is a comment and is ignored.
2   --log-level=debug
3   --bbox=13.3508824,52.4799281,13.3578297,52.4862805,4326 -g 3,4
4   --type-name
5   Building

Line 1 is a comment and is ignored. Line 2 contains a single option, whereas two options separated by a space are put on line 3. Lines 4 and 5 illustrate that you can also put an option and its value on separate lines.
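
To illustrate the quoting rules, a hypothetical @-file for a delete command might quote a value with embedded whitespace and split a long value across two lines by escaping the line break:

--updating-person="Jane Doe"
--sql-select="select cityobject_id from cityobject_genericattrib \
    where attrname='energy_level' and realval < 12"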

To use an argument file on the command line, use the @ character followed by a relative or absolute path to the file. If the path contains spaces, such as C:\Program Files, you can put the entire path into double quotes "C:\\Program Files". Note that all backslashes in the quoted path must be escaped in this case. To avoid escaping, you can also use C:\Program" "Files instead. If the file does not exist, or cannot be read, then the argument will be treated literally, and not removed. Of course, you can specify multiple @-files on the same command line.

For example, assume the above argument file exists at /home/foo/args. An export command using this argument file can be invoked like shown below.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                @/home/foo/args -o my_city.gml

The content of the @-file is automatically expanded into the argument list of the above command in the background.

$ impexp export -H localhost -d citydb_v4 -u citydb_user -p my_password \
                --log-level=debug \
                --bbox=13.3508824,52.4799281,13.3578297,52.4862805,4326 -g 3,4 \
                --type-name Building \
                -o my_city.gml

Note

The argument file may itself contain additional @-file arguments. Any such arguments will be processed recursively.

Note

Argument files are also a nice way to create templates for different purposes. For example, you can specify separate files for different logging options such as “full logging” (using the debug log level and an additional log file) and “minimum logging” (only error log level without a log file). Depending on the use case and scenario, you can simply pick one or the other, even programmatically.

In addition to the graphical user interface, the Importer/Exporter also comes with a command-line interface (CLI) letting you execute the main operations of the Importer/Exporter from your command prompt. The CLI also allows you to easily embed the Importer/Exporter in batch processing workflows and third-party applications, and to run it from within a Docker container.

To launch the CLI, simply execute the impexp start script on the command line. The script is located in the bin folder within the installation directory of your Importer/Exporter. Depending on your platform, it comes in two flavors:

  • impexp.bat (Microsoft Windows family)
  • impexp (UNIX/Linux/Mac OS family)

The CLI is launched with default options for the Java Virtual Machine (JVM) that runs the application. You can override these default options in the launch process by using the environment variable JAVA_OPTS. More information is available in Section 4.1.

Note

For convenience, it is recommended to add the impexp executable to your path. Use one of the following commands to do so.

Windows: set PATH=C:\path\to\3DCityDB-Importer-Exporter\bin;%PATH%
Linux: export PATH=/path/to/3DCityDB-Importer-Exporter/bin:$PATH

Note

Instead of using the predefined start script, you can also directly invoke the JAR file impexp-client-<version>.jar in the lib folder of the installation directory, which implements the command-line interface. This way, you have full control over the Java runtime and options for executing the CLI. Use the following command as starting point.

java [JVM_OPTIONS] -jar lib/impexp-client-<version>.jar [CLI_OPTIONS]

Synopsis

impexp [-hV] [--ade-extensions=<folder>] [-c=<file>] [--log-file=<file>]
       [--log-level=<level>] [--pid-file=<file>] [--plugins=<folder>]
       [--use-plugin=<plugin[=true|false]>[,<plugin[=true|false]
       >...]]... [@<filename>...] COMMAND

Description

The impexp interface offers a number of commands to interact with the 3D City Database and to execute import, export and delete operations. The commands have their own options and parameters and are discussed in separate sections of this chapter.

Commands offered by the command-line interface
Subcommand   Description
help         Displays help information about the specified command.
import       Imports data in CityGML or CityJSON format.
export       Exports data in CityGML or CityJSON format.
export-vis   Exports data in KML/COLLADA/glTF format for visualization.
delete       Deletes top-level city objects from the database.
validate     Validates input files against their schemas.
gui          Starts the graphical user interface.

Note

Plugins can add their own commands to the command-line interface. Thus, the list of available commands may increase depending on the number of plugins you have registered with the Importer/Exporter. Please refer to the user manual of your plugin for a documentation of the plugin-specific commands.

You are required to pass the COMMAND to be executed as mandatory input to the impexp interface. Otherwise, the tool will terminate and print a general help message to the terminal.

The command-line tool uses exit codes to signify success or failure of an operation. In general, the value 0 means success, 1 means the operation aborted abnormally due to errors, and 2 indicates invalid input for an option or parameter. The separate commands may add their own operation-specific exit codes to this list.
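
These exit codes make it easy to chain the impexp tool in shell scripts. A minimal sketch (file path and connection details are placeholders; impexp is assumed to be on the PATH):

#!/bin/bash
# abort the batch workflow if the validation fails
if ! impexp validate /data/my_city.gml; then
    echo "Validation failed, aborting." >&2
    exit 1
fi
impexp import -H localhost -d citydb_v4 -u citydb_user -p my_password /data/my_city.gml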

Just like with the graphical user interface, you can use the environment variable JAVA_OPTS in the launch process of the CLI to provide options to the Java Virtual Machine (JVM) that runs the tool. For example, you can adapt the amount of main memory that shall be available for the Importer/Exporter. Please refer to Section 4.1 for how to use the JAVA_OPTS variable. Expert users may also directly adapt the impexp start script.

More general functionalities and usage information for the command-line tool are discussed in Section 4.9.9.

Global options

The impexp main command provides global options that can be used with all commands. On the command line, you are free to put them in front of or after the COMMAND that you want to execute.

COMMAND

The name of the COMMAND to execute.

-c, --config=<file>

Use the settings from the specified XML config file for the command to be executed. In general, every command provides a set of command-line options to adapt its behaviour to your needs. These options are meant to cover the most common use cases but do not reflect all settings available from the graphical user interface. If you miss specific settings, you can provide a config file in addition to or even instead of the command-line options offered by the command. Note that settings specified through command-line options always take precedence over settings from a config file.

The easiest way to create a config file is to run the Importer/Exporter in GUI mode, make all required settings, and then save the settings to a file using “File -> Save Settings As…” from the menu bar. You can directly use the resulting file or adapt it to your needs before feeding it to the CLI. Of course, you can also create the config file fully automatically and only with the content you need. Either way, make sure the config file is valid with respect to the XML Schema definition of the Importer/Exporter. You can easily get a copy of the XSD file via the “File -> Save Settings XSD As…” entry from the menu bar.
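
For example, a hypothetical export run can combine a config file with additional command-line options; the database connection details are assumed to be provided in the config file, and --log-level overrides the corresponding config setting:

$ impexp export -c /path/to/my_config.xml --log-level=debug -o my_city.gml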

Note

You can also create a config file programmatically in Java. The JAR file impexp-config-{version}.jar in the lib folder of the installation directory contains all the classes required for reading and writing a config file. Once you have the JAR file on your classpath, use the class org.citydb.config.ConfigUtil as starting point.

--log-level=<level>

Log level to be used for printing log messages to the terminal and a log file. Valid values are error, warn, info, and debug (default: info). See also Section 4.7.7.

--log-file=<file>

If you want the log messages to be printed to a log file in addition to the terminal, provide the full path to the log file with this option. The file will be truncated if it already exists.

--pid-file=<file>

Create a file containing the process ID (PID) of the impexp tool at the specified path. The PID can be used, for instance, to check whether the process is still running or to send signals to the process. The PID file is automatically deleted when the impexp tool terminates.
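
For instance, a shell sketch (paths are placeholders) that starts a long-running export in the background and later stops it again using the PID file:

$ impexp export -c /path/to/my_config.xml -o my_city.gml --pid-file=/tmp/impexp.pid &
$ kill $(cat /tmp/impexp.pid)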

--plugins=<folder>

Provide an alternative path where to look for Importer/Exporter plugins. By default, plugins are searched for in the plugins folder within the installation directory of the Importer/Exporter.

--use-plugin=<plugin[=true|false]>[,<plugin[=true|false]>...]

Comma-separated list of plugins that shall be enabled (default: true) or disabled (false) on startup. Use the fully qualified class name of the plugin to uniquely identify it. Disabling unnecessary plugins can increase performance.
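
For example, to start the GUI with a hypothetical plugin disabled:

$ impexp gui --use-plugin=com.example.plugins.MyPlugin=false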

--ade-extensions=<folder>

Provide an alternative path where to look for ADE extensions. By default, ADE extensions are searched for in the plugins folder within the installation directory of the Importer/Exporter.

@<filename>...

When creating a command line with lots of options or with long arguments for options, you might run into system limitations on the length of the command line. Argument files (@-files) are a way to overcome this problem. Argument files are files that themselves contain arguments to the command. You can provide one or more argument files on the command line by simply prefixing their filenames with the character @. The content of each @-file is automatically expanded into the argument list. See Section 4.9.9.3 for more information.

-h, --help

Display a help message for the specified command and exit.

-V, --version

Print version information and exit.

Using the Importer/Exporter with Docker

The 3D City Database (3DCityDB) Importer/Exporter Docker images expose the capabilities of the Importer/Exporter CLI for dockerized applications and workflows. All CLI commands except the gui command are supported.

Synopsis

docker run --rm --name impexp [-i -t] \
    [-e CITYDB_TYPE=postgresql|oracle] \
    [-e CITYDB_HOST=the.host.de] \
    [-e CITYDB_PORT=5432] \
    [-e CITYDB_NAME=theDBName] \
    [-e CITYDB_SCHEMA=theCityDBSchemaName] \
    [-e CITYDB_USERNAME=theUsername] \
    [-e CITYDB_PASSWORD=theSecretPass] \
    [-v /my/data/:/data] \
  3dcitydb/impexp[:TAG] COMMAND

Image variants and versions

The Importer/Exporter Docker images are available based on Debian and Alpine Linux. The Debian variant is based on the OpenJDK images, while the Alpine Linux variant is based on the non-official images from AdoptOpenJDK. The images use the latest LTS JRE version available at the time a new Importer/Exporter version is released. Table 4.30 gives an overview of the available images.

3DCityDB Importer/Exporter Docker image variants and versions
Tag      Debian variant   Alpine variant
edge     edge             edge-alpine
latest   latest           latest-alpine
4.3.0    4.3.0            4.3.0-alpine
5.0.0    5.0.0            5.0.0-alpine

The edge images are automatically built and published on every push to the master branch of the 3DCityDB Importer/Exporter GitHub repository using the latest stable version of the base images. The latest and release image versions are only built when a new release is published on GitHub. The latest tag will point to the most recent release version.

The images are available on 3DCityDB DockerHub and can be pulled like this:

docker pull 3dcitydb/impexp:TAG

The image tag is composed of the Importer/Exporter version and the image variant. Debian is the default variant, for which no suffix is appended to the tag. For the Alpine Linux images, -alpine is appended. The full list of available tags can be found on DockerHub. Here are some examples of full image tags:

docker pull 3dcitydb/impexp:edge
docker pull 3dcitydb/impexp:latest-alpine
docker pull 3dcitydb/impexp:5.0.0
docker pull 3dcitydb/impexp:5.0.0-alpine

Usage and configuration

The 3DCityDB Importer/Exporter Docker images do not require configuration for most use cases and allow the usage of the Importer/Exporter CLI out of the box. Simply append the Importer/Exporter CLI command you want to execute to the docker run command line.

docker run --rm --name impexp 3dcitydb/impexp COMMAND

However, the database credentials can be passed to the Importer/Exporter container using environment variables as well, as described in Section 4.10.2.1.
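
For example, a sketch of an export where the (hypothetical) connection details are passed entirely through environment variables instead of CLI options:

docker run --rm --name impexp \
    -e CITYDB_HOST=my.host.de \
    -e CITYDB_NAME=citydb \
    -e CITYDB_USERNAME=postgres \
    -e CITYDB_PASSWORD=changeMe \
    -v /my/data/:/data \
  3dcitydb/impexp export -o output.gml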

All import and export operations require a mounted directory for exchanging data between the host system and the container. Use the -v or --mount options of the docker run command to mount a directory or file.

# mount /my/data/ on the host system to /data inside the container
docker run --rm --name impexp \
    -v /my/data/:/data \
  3dcitydb/impexp COMMAND

# Mount the current working directory on the host system to /data
# inside the container
docker run --rm --name impexp \
    -v $(pwd):/data \
  3dcitydb/impexp COMMAND

Note

The default working directory inside the container is /data.

Tip

Watch out for correct paths when working with mounts! All paths passed to the Importer/Exporter CLI have to be specified from the container’s perspective. If you are not familiar with how Docker manages volumes and bind mounts, go through the Docker volume guide.

In order to allocate a console for the container process, you must use -i -t together. This comes in handy, for instance, if you don’t want to pass the password for the 3DCityDB connection on the command line but rather want to be prompted to enter it interactively on the console. You must use the -p option of the Importer/Exporter CLI without a value for this purpose (see Section 4.9) as shown in the example below. Note that the -i -t options of the docker run command are often combined and written as -it.

docker run -it --rm --name impexp \
    -v /my/data/:/data \
  3dcitydb/impexp import \
    -H my.host.de -d citydb -u postgres -p \
    bigcity.gml

The docker run command offers further options to configure the container process. Please check the official reference for more information.

Environment variables

The Importer/Exporter Docker images support the following environment variables to set the credentials for the connection to a 3DCityDB instance (see also Section 4.9.8).

Warning

When running the Importer/Exporter on the command line, the values of these variables are used if the corresponding CLI options are not provided. Thus, the CLI options always take precedence.

CITYDB_TYPE=<postgresql|oracle>

The type of the 3DCityDB to connect to. postgresql is the default.

CITYDB_HOST=<hostname or ip>

Name of the host or IP address on which the 3DCityDB is running.

CITYDB_PORT=<port>

Port of the 3DCityDB to connect to. Default is 5432 for PostgreSQL and 1521 for Oracle.

CITYDB_NAME=<dbName>

Name of the 3DCityDB database to connect to.

CITYDB_SCHEMA=<citydb>

Schema to use when connecting to the 3DCityDB (default: citydb for PostgreSQL, the username for Oracle).

CITYDB_USERNAME=<username>

Username to use when connecting to the 3DCityDB.

CITYDB_PASSWORD=<thePassword>

Password to use when connecting to the 3DCityDB.

User management and file permissions

When exchanging files between the host system and the Importer/Exporter container, it is important to make sure that files and directories have their permissions set correctly. For security reasons, the Importer/Exporter runs as a non-root user by default inside the container. The default user is named impexp with user and group identifier (uid, gid) = 1000.

$ docker run --rm --entrypoint bash 3dcitydb/impexp \
    -c "cat /etc/passwd | grep impexp"

impexp:x:1000:1000::/data:/bin/sh

As 1000 is the default uid/gid for the first user on many Linux distributions, in most cases you won’t notice this: the user on the host system will have the same uid/gid as the user inside the container. However, if you are facing file permission issues, you can run the Importer/Exporter container as another user with the -u option of the docker run command. This way you can make sure that the right permissions are set on generated files in the mounted directory.

The following example illustrates how to use the -u option to pass the user ID of your current host’s user.

docker run --rm --name impexp \
    -u $(id -u):$(id -g) \
    -v /my/data/:/data \
  3dcitydb/impexp COMMAND

Build your own images

You can easily build the 3DCityDB Importer/Exporter images on your own. The images support two build arguments:

BUILDER_IMAGE_TAG=<tag of the builder image>

Set the tag of the builder image, which is openjdk for the Debian and adoptopenjdk/openjdk11 for the Alpine image variant. This base image is only used for building the Importer/Exporter from source.

RUNTIME_IMAGE_TAG=<tag of the runtime image>

Set the tag of the runtime image, which is openjdk for the Debian and adoptopenjdk/openjdk11 for the Alpine image variant. This is the base image the container runs with.

Build process

  1. Clone the Importer/Exporter Github repository and navigate to the cloned repo:

    git clone https://github.com/3dcitydb/importer-exporter.git
    cd importer-exporter
    
  2. Build the image using docker build:

# Debian variant
docker build . \
  -t 3dcitydb/impexp

# Alpine variant
docker build . \
  -t 3dcitydb/impexp \
  -f Dockerfile.alpine
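
If you want to build against specific base image tags, the two build arguments can be passed with --build-arg. The tag values below are placeholders and must exist for the respective base image:

# Debian variant with explicit builder and runtime image tags
docker build . \
  -t 3dcitydb/impexp \
  --build-arg BUILDER_IMAGE_TAG=11-jdk-slim \
  --build-arg RUNTIME_IMAGE_TAG=11-jre-slim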

Examples

For the following examples we assume that a 3DCityDB instance with the following settings is running:

Example 3DCityDB instance
HOSTNAME      my.host.de
PORT          5432
DB TYPE       postgresql
DB DBNAME     citydb
DB USERNAME   postgres
DB PASSWORD   changeMe!

Importing CityGML

This section provides some examples for importing CityGML datasets. Refer to Section 4.9.2 for a detailed description of the Importer/Exporter CLI import command.

Import the CityGML dataset /home/me/mydata/bigcity.gml on your host system into the DB given in Listing 4.1:

docker run --rm --name impexp \
    -v /home/me/mydata/:/data \
  3dcitydb/impexp import \
    -H my.host.de -d citydb -u postgres -p changeMe! \
    bigcity.gml

Note

Since the host directory /home/me/mydata/ is mounted to the default working directory /data inside the container, you can simply reference your input file by its filename instead of using an absolute path.

Import all CityGML datasets from /home/me/mydata/ on your host system into the DB given in Listing 4.1:

docker run --rm --name impexp \
    -v /home/me/mydata/:/data \
  3dcitydb/impexp import \
    -H my.host.de -d citydb -u postgres -p changeMe! \
    /data/

Exporting CityGML

This section provides some examples for exporting CityGML datasets. Refer to Section 4.9.3 for a detailed description of the Importer/Exporter CLI export command.

Export all data from the DB given in Listing 4.1 to /home/me/mydata/output.gml:

docker run --rm --name impexp \
    -v /home/me/mydata/:/data \
  3dcitydb/impexp export \
    -H my.host.de -d citydb -u postgres -p changeMe! \
    -o output.gml

Importer/Exporter Docker combined with 3DCityDB Docker

This example shows how to use the 3DCityDB and Importer/Exporter Docker images in conjunction. Let’s assume we have a CityGML file containing a few buildings on our Docker host at /d/temp/buildings.gml.

First, let’s bring up a Docker network and a 3DCityDB instance using the 3DCityDB Docker images. Note that we name the container citydb (--name citydb). You can use the LoD3 Railway dataset for testing.

docker network create citydb-net

docker run -d --name citydb \
    --network citydb-net \
    -e POSTGRES_PASSWORD=changeMe \
    -e SRID=25832 \
  3dcitydb/3dcitydb-pg:latest-alpine

The next step is to import our data into the 3DCityDB container. Therefore, we need to mount our data directory into the container using the -v option. Note how the container name citydb from the first step is used as hostname (-H citydb); this works because both containers are attached to the same Docker network (--network citydb-net).

Note

There are many other networking options to connect Docker containers. Take a look at the Docker networking overview to learn more.

docker run -i -t --rm --name impexp \
    --network citydb-net \
    -v /d/temp:/data \
  3dcitydb/impexp:latest-alpine import \
    -H citydb \
    -d postgres \
    -u postgres \
    -p changeMe \
    /data/buildings.gml

Now, with our data inside the 3DCityDB, let’s use the Importer/Exporter to create a visualization export. We are going to export all Building features in LoD 2 as binary glTF with embedded textures and Draco compression enabled. All buildings will be translated to elevation 0 to fit into a visualization without a terrain model.

docker run -i -t --rm --name impexp \
    --network citydb-net \
    -v /d/temp:/data \
  3dcitydb/impexp:latest-alpine export-vis \
    -H citydb \
    -d postgres \
    -u postgres \
    -p changeMe \
    -l 2 \
    -D collada \
    -G \
    --gltf-binary \
    --gltf-embed-textures \
    --gltf-draco-compression \
    -O globe \
    -o /data/building_glTf.kml

The export files are now available in /d/temp.

$ ls -lhA /d/temp

drwxrwxrwx 1 theUser theUser 4.0K May  6 17:51 Tiles/
-rwxrwxrwx 1 theUser theUser 1.4K May  6 17:55 building_glTf.kml*
-rwxrwxrwx 1 theUser theUser  310 May  6 17:55 building_glTf_collada_MasterJSON.json*
-rwxrwxrwx 1 theUser theUser 3.2M May  5 16:25 buildings.gml*

As we are done now, the 3DCityDB container and the network are no longer needed and can be removed:

docker rm -f -v citydb
docker network rm citydb-net

Hint

The Importer/Exporter also serves as reference implementation for the 3D City Database. So, if you wonder how to correctly populate the database tables, it is recommended to create simple example datasets in CityGML or CityJSON format and to load them into the database using the Importer/Exporter. Afterwards, you can inspect how the data is represented in the 3D City Database schema.

Importer/Exporter plugins

The Importer/Exporter offers a plugin mechanism that supports the modular development and deployment of additional functionalities for interacting with the 3D City Database or external datasets. For instance, plugins may enable loading or extracting 3D city model content using data formats other than CityGML, CityJSON or KML/COLLADA/glTF.

Introduction to the plugin mechanism

Plugins are extensions that add additional functionality to the Importer/Exporter. They are self-contained in that one plugin cannot extend the functionality of another plugin. Therefore, plugins can be added separately to the Importer/Exporter without interdependencies.

A plugin may extend the GUI of the Importer/Exporter by providing its own user dialog that will be rendered in a separate tab on the operations window. In addition, a plugin may add new entries to the main menu bar and the preferences dialog. To restore plugin-specific settings at program startup, a plugin can choose to serialize the settings to the main config file or a plugin-specific config file. Besides GUI extensions, plugins can also provide functionality that is hooked into the main operations of the Importer/Exporter, for instance, to postprocess exported top-level features. And they can add their own commands to the CLI of the Importer/Exporter to run the plugin from the command line.

Plugin installation is simple. Just get the plugin from your vendor and put all plugin files into the plugins subfolder of the Importer/Exporter installation directory. To keep multiple plugins independent from each other, it is recommended to create a separate subfolder below plugins for each plugin. When running the Importer/Exporter, the installed plugins are automatically detected and loaded with the application. Uninstalling a plugin just requires deleting the folder containing all the plugin files from the plugins subfolder. Note that changes to the plugins subfolder require a restart of the Importer/Exporter to become effective.

Installed plugins can be enabled or disabled at runtime both using the graphical user interface of the Importer/Exporter (see Section 4.7.1) and on the command line (see Section 4.9).

The Importer/Exporter is shipped with two free and open-source plugins that can be installed during the setup process (see Section 1.2). The Spreadsheet Generator Plugin allows for exporting attributes of city objects as spreadsheets with user-defined formatting, either to a CSV or a Microsoft Excel file (see Section 5.2). The ADE Manager Plugin automatically transforms CityGML ADEs to relational schemas extending the 3DCityDB schema and un-/registers such ADE schemas with existing 3DCityDB instances.

You can also develop your own plugins. For this purpose, the Importer/Exporter comes with a Plugin API that is available from the JAR file impexp-core-{version}.jar. Simply put this JAR file on your classpath to start plugin development. The source code of the Spreadsheet Generator Plugin and the ADE Manager Plugin can be used as a template for your own developments.

Spreadsheet Generator Plugin

Note

This is the documentation of the Spreadsheet Generator Plugin version 4.0.

The Spreadsheet Generator plugin allows you to export attribute data of the city objects stored in the 3D City Database in tabular form as comma-separated values (CSV) or Microsoft Excel (XLSX) file. The tabular data can be loaded with typical spreadsheet applications like Microsoft Excel or OpenOffice Calc for further processing or uploaded to online spreadsheet services such as Google Docs. In the latter case, the attributes can be accessed, for instance, from web-based viewers to show or even modify them when interacting with the city objects. For example, the 3D Web Map Client shipped with the 3D City Database can be easily linked to online spreadsheets that have been created with this plugin (see Section 6 for more information).

The Spreadsheet Generator plugin provides a graphical user interface (GUI) that is added as Table Export operation tab to the GUI of the Importer/Exporter. The plugin can also be used on the command line. For this purpose, it adds the command export-table to the command-line interface (CLI) of the Importer/Exporter. The following sections cover the installation and use of the plugin.

Installation

The Spreadsheet Generator plugin is packaged with the Importer/Exporter installer. When using the GUI-based setup wizard for installation (see Section 1.2.1), you can simply select the plugin from the list of available software packages as shown below.

_images/impexp_plugin_spshg_installation_fig.PNG

Installation wizard of the Importer/Exporter tool.

If you have not installed the plugin together with the Importer/Exporter, it is also possible to install it at any later time with the following steps:

  1. Download the Spreadsheet Generator plugin as ZIP file from the official 3D City Database website at https://www.3dcitydb.org or from the releases section of the GitHub repository used for maintaining the plugin at https://github.com/3dcitydb/plugin-spreadsheet-generator.

    Caution

    Make sure the version of the Spreadsheet Generator plugin that you want to download can be used together with the version of your Importer/Exporter installation.

  2. Open the plugins folder within the installation directory of your Importer/Exporter and unzip the ZIP file of the Spreadsheet Generator plugin there. If the plugins folder does not exist, then create it first. After unzipping, a new subfolder plugin-spreadsheet-generator should have been created containing all files required by the plugin.

  3. Run the Importer/Exporter. The Spreadsheet Generator plugin should be automatically detected and loaded.

If you have successfully installed the plugin, the Table Export operation tab is available on the operations window of the Importer/Exporter as illustrated below. In addition, the command export-table is added to the command-line interface of the Importer/Exporter.

_images/plugin_spreadsheet_main_gui.png

The “Table Export” operation tab of the Spreadsheet Generator plugin.

Export tabular data

Rules for column expressions

The Spreadsheet Generator plugin uses simple expressions to reference the columns of the 3DCityDB database schema that shall be exported to the output CSV/XLSX file. The syntax of these expressions is based on the balloon content language used by the visualization export operation (see Section 4.6.8).

If you want to create your column expressions manually rather than using the graphical user interface of the plugin, you must comply with the following rules.

  • Expressions are coded in the form "TABLE/[AGGREGATION FUNCTION] COLUMN [CONDITION]". The table and column tokens are mandatory and must exist in the 3DCityDB schema (see Section 3.2.16.1). Both the aggregation function and the condition are optional. When present, they must be written in square brackets. The expressions are mapped to SQL statements of the form: SELECT (AGGREGATION FUNCTION) COLUMN FROM TABLE (WHERE CONDITION).
  • Expressions are case-insensitive.
  • Each expression will only return those entries relevant to the city object being currently exported. For this purpose, a filter condition of the form "TABLE.CITYOBJECT_ID = CITYOBJECT.ID" is always added automatically to each expression and, thus, is not required to be added by the user manually.
  • Results will be returned as a comma-separated list. A list can be empty or contain one or more items satisfying the expression. When only interested in the first entry of a list, the aggregation function FIRST can be used. Other possible aggregation functions are LAST, MAX, MIN, AVG, SUM and COUNT.
  • A condition can simply be an index to access a specific entry from the result list. Alternatively, it can be a comparison expression involving a column name, a comparison operator and a value. For instance: [2] or [NAME = 'abc'].
  • Invalid results will be silently discarded.
  • Multi-line values are supported for the columns in the output file. Use "[EOL]" to add a line break to the expression.
  • Expressions can be surrounded by static strings that are exported as-is.

Examples

ADDRESS/STREET

The above expression returns the value of the STREET column of the ADDRESS table for each city object, for example:

Unter den Linden

If the city object is assigned multiple addresses, the ADDRESS table will contain more than one row for this city object. In such cases, all values of the STREET column are returned as a comma-separated list.

Unter den Linden, Friedrichstraße

If you want to avoid comma-separated lists as return value, you can additionally use an aggregation function for the column values as shown below.

ADDRESS/[FIRST]STREET

This expression will only return the first STREET value for the city object independent of how many addresses the city object has.

ADDRESS/[FIRST]STREET ADDRESS/[FIRST]HOUSE_NUMBER, [EOL] ADDRESS/[FIRST]ZIP_CODE ADDRESS/[FIRST]CITY

The above snippet combines multiple expressions that are mapped to a single value in the output CSV/XLSX file. The [EOL] token adds a line break to the value. Note the use of the comma , that is added as static content to every value. The return value for a given city object might look as follows:

Unter den Linden 135,
10623 Berlin

CITYOBJECT_GENERICATTRIB/ATTRNAME

Return the names of all generic attributes assigned to a city object. The names will be returned as comma-separated list.

CITYOBJECT_GENERICATTRIB/REALVAL[ATTRNAME = 'SOLAR_SUM_INVEST'] EUR

Return the value of the REALVAL column of the generic attribute whose ATTRNAME is equal to SOLAR_SUM_INVEST. The string EUR is added to the number as static content. This expression might produce the following result.

23,000.00 EUR

Rules for template files

A template file defines the layout and content of the output CSV/XLSX file generated by the Spreadsheet Generator plugin. The easiest way to create a template file is to use the graphical user interface of the plugin as described in Section 5.2.2. If you want to create a template manually instead, you must comply with the following simple rules.

  • A template file is a plain-text file.
  • Each line of a template file shall either define a column for the output file or contain a comment. Blank lines are not allowed.
  • A column shall be specified as [Title:]Content.
    • The optional Title defines the column title that shall be used as header information for the resulting output file. If provided, the title must be separated from the Content using a single : colon. Otherwise, a default title will be automatically generated.
    • The mandatory Content defines the value that shall be stored in this column for each record of the output file. The content can be an expression referencing a database column or a static value or a combination of both. Please refer to Section 5.2.2.1 for how to write column expressions.
  • Comment lines must start with // or ; as comment marker. They are ignored during export.

Note

For every city object, its object identifier stored in the GMLID column of the CITYOBJECT table will always be exported as first column of each record in the output file. The name of this first column is always “GMLID”. Thus, there is no need to define your own column for the GMLID value.

The following snippet shows an example template file.

// This is a template file for the export of tabular attribute data.
// Lines starting with // or ; are comments and will be ignored.
;
Street:ADDRESS/[FIRST]STREET
Houseno:ADDRESS/[FIRST]HOUSE_NUMBER
City:ADDRESS/[FIRST]CITY
Address:ADDRESS/[FIRST]STREET, ADDRESS/[FIRST]HOUSE_NUMBER[EOL]ADDRESS/[FIRST]CITY
;
// Investment required for equipping the city object with solar panels
Investment:CITYOBJECT_GENERICATTRIB/REALVAL[ATTRNAME = 'SOLAR_SUM_INVEST'] EUR

When using this sample template file in an export operation, a resulting output file might look like the one illustrated in Fig. 5.3.

_images/impexp_plugin_spshg_example_exported_table_fig.png

Example output of the table export operation using the sample template presented above.

Feature version filter

In both CityGML and CityJSON, the temporal creationDate and terminationDate attributes can be used to represent different versions of the same feature that are valid at different points in time. The 3D City Database allows for storing multiple versions of the same feature to enable object histories. The timestamps are stored in the CREATION_DATE and TERMINATION_DATE columns of the CITYOBJECT table.

Using the feature version filter, a user can choose which version of the top-level features should be selected in a table export operation.

_images/impexp_plugin_spshg_feature_version_filter.png

Feature version filter for table export operations.

The different feature version options available from the drop-down list are described below.

Overview of the different feature version options:

  • Latest version: Selects top-level features that are not marked as terminated in the database and, thus, whose TERMINATION_DATE attribute is null.
  • Valid version: Selects top-level features that were valid at a given timestamp or for a given time range. The filter is evaluated against the CREATION_DATE and TERMINATION_DATE attributes.
  • Terminated version: Selects only terminated top-level features. You can choose to either select all terminated features or only those that were terminated at a given timestamp. The filter is evaluated against the TERMINATION_DATE attribute.

For example, you can use Valid version to export attributes from a past state of your 3D city model (e.g., on March 1, 2018) and compare them to the current version.
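
Conceptually, the Valid version filter corresponds to a simple condition on the CREATION_DATE and TERMINATION_DATE columns. The following SQL sketch illustrates the semantics for March 1, 2018; it is only meant as an illustration, and the actual SQL generated by the export operation may look different.

-- city objects whose validity interval contains 2018-03-01
select
    id, gmlid
from
    cityobject
where
    creation_date <= '2018-03-01'
    and (termination_date is null or termination_date > '2018-03-01')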

Note

For the feature version filter to work correctly, you must make sure that the validity times of subsequent feature versions do not overlap. The Importer/Exporter does not provide specific tools for managing feature versions in the database.

Hint

If your 3D City Database does not contain multiple feature versions, you should always disable the feature version filter to avoid unnecessarily complex SQL queries.

Attribute filter

The attribute filter lets you define values for the object identifier, gml:name and citydb:lineage, which must be matched by a top-level feature to be exported.

_images/impexp_plugin_spshg_attribute_filter.png

Attribute filter for table export operations.

More than one identifier can be provided in a comma-separated list. Multiple gml:name and citydb:lineage values are not supported though.

Both the gml:name and citydb:lineage search strings support two wildcard characters: “*” representing zero or more characters and “.” representing a single character. You can use the escape character “\” to escape the wildcards. For example, if you provide *abc for the gml:name filter, then features with a gml:name of “xyzabc” and “abc” will both be exported. If you enter \*abc instead, the gml:name must exactly match “*abc” for the feature to be exported.

SQL filter

The SQL filter offers a powerful way to query top-level features based on a user-defined SELECT statement.

_images/impexp_plugin_spshg_sql_filter.png

SQL filter for table export operations.

The SQL query is entered in [1]. The + and - buttons [2] on the right side of the input field allow for increasing or reducing the size of the input field.

In general, any SELECT statement supported by the underlying database system can be used as SQL filter. The query may operate on all tables and columns of the database and may involve any database function or operator. The SQL filter therefore provides a high degree of flexibility for querying content from the 3DCityDB based on your filter criteria.

The only mandatory requirement is that the SQL query must return a list of database IDs of the selected city objects. Put differently, the result set returned by the query may only contain a single column with references to the ID column of the CITYOBJECT table. The name of the result column can be freely chosen, and the result set may contain duplicate ID values. Of course, it must also be ensured that the SELECT statement follows the specification of the database system.

The following example shows a simple query that selects all city objects having a generic attribute named energy_level with a value less than 12.

select
    cityobject_id
from
    cityobject_genericattrib
where
    attrname='energy_level' and realval < 12

The CITYOBJECT_ID column of CITYOBJECT_GENERICATTRIB stores foreign keys to the ID column of CITYOBJECT. The return set therefore fulfills the above requirement.

Note that you do not have to care about the type of the city objects belonging to the ID values in the return set. Since the SQL filter is evaluated together with all other filter settings on the Export tab, the export operation will automatically make sure that only top-level features in accordance with the feature type filter are exported. For example, the above query might return ID values of buildings, city furniture, windows or traffic surfaces. If, however, only buildings have been chosen in the feature type filter, then all ID values in the result set not belonging to buildings will be ignored. This allows writing generic queries that can be reused in different filter combinations. Of course, you may also limit the result set to specific city objects if you like.
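
If you want to restrict the result set already in the query itself, you can, for example, join the CITYOBJECT table and filter on its OBJECTCLASS_ID column. The following sketch assumes the predefined object class ID 26 for Building features (cf. the OBJECTCLASS table of the 3DCityDB schema):

select
    ga.cityobject_id
from
    cityobject_genericattrib ga
inner join
    cityobject co on co.id = ga.cityobject_id
where
    ga.attrname = 'energy_level'
    and ga.realval < 12
    and co.objectclass_id = 26  -- 26 = Building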

The following example illustrates a more complex query selecting all buildings having at least one door object.

select
     t.building_id
from
     thematic_surface t
inner join
     opening_to_them_surface o2t on o2t.thematic_surface_id = t.id
inner join
     opening o on o.id = o2t.opening_id
where
     o.objectclass_id = 39
group by
     t.building_id
having
     count(distinct o.id) > 0

Caution

Statements other than SELECT, such as UPDATE, DELETE or DDL commands, will be rejected and yield an error message. In principle, however, it is possible to create database functions that can be invoked with a SELECT statement and that delete or change content in the database. An example is the set of DELETE functions offered by the 3DCityDB itself (cf. Section 3.5.8). For this reason, the export operation scans the SQL filter statement for these well-known DELETE functions and refuses to execute them. Similar functions can, however, also be created after setting up the 3DCityDB schema and thus are not known to the export operation a priori. If such functions exist and a user of the Importer/Exporter shall not be able to accidentally invoke them through an SQL query, it is strongly recommended that this user connect to the 3DCityDB instance only via a read-only database user (cf. Section 3.4.2).

Bounding box filter

The bounding box filter takes a 2D bounding box as parameter that is given by the coordinate values of its lower left (xmin, ymin) and upper right (xmax, ymax) corner. It is evaluated against the ENVELOPE column of the CITYOBJECT table.

_images/impexp_plugin_spshg_bbox_filter.png

Bounding box filter for table export operations.

All top-level features whose envelopes overlap with the provided bounding box will be exported.

Similar to the CityGML/CityJSON export operation, the coordinate values of the bounding box filter can either be entered manually or chosen interactively in a 2D map window. To open the map window, click on the map button map_select. A comprehensive guide on how to use the map window is provided in Section 4.8.

Note

When choosing a spatial bounding filter, make sure that spatial indexes are enabled (use the index operation on the Database tab to check the status of indexes, cf. Section 4.3.3).

Note

If the entire 3D city model stored in the 3DCityDB instance shall be exported with tiling enabled, then a bounding box spanning the overall area of the model must be provided. This bounding box can be easily calculated on the Database tab (cf. Section 4.3.2).

Feature type filter

With the feature type filter, you can restrict the export to one or more feature types by enabling the corresponding checkboxes. Only features of the selected type(s) will be exported.

_images/impexp_plugin_spshg_feature_type_filter.png

Feature type filter for table export operations.

The feature type filter only shows top-level feature types.

Caution

The feature type filter does not support CityGML ADE extensions. Thus, even if you have registered an ADE extension with the 3D City Database and the Importer/Exporter, the feature type filter will not automatically contain feature types from the corresponding CityGML ADE.

The Spreadsheet Generator plugin adds the Table Export tab to the operations window of the Importer/Exporter to export attributes of the city objects stored in the 3D City Database in tabular form. The following figure shows the content of the Table Export tab.

_images/plugin_spreadsheet_gui.png

The table export dialog.

Output file selection

At the top of the export dialog, the target output file must be specified [1]. You can choose to either export the attribute data as CSV or as Microsoft Excel (XLSX) file. Enter the output file manually or open a file selection dialog via the Browse button. When selecting CSV as output format, the delimiter to be used for separating the values in the CSV file can be additionally defined with the corresponding drop-down list. By default, a comma , is used as delimiter.

Columns to export

The Columns section of the Table Export operation tab lets you define the attributes that shall be exported as separate columns of the output file [2]. You can load an existing template file that defines the table layout to be used in the export. Simply enter the path to the template file or use a file selection dialog by clicking on the Browse button. Alternatively, click on the New button if you want to define a new table layout for the export instead. This will bring up an additional user dialog as illustrated below. The same dialog is shown if you load a template file but want to adapt it before exporting by pushing the Edit button.

_images/impexp_plugin_spshg_new_template_fig.PNG

Dialog to create a new table layout or edit an existing one loaded from a template file.

The columns to be exported are listed as separate entries of the table in Fig. 5.10. Each column has a Column title that is used as header information for the column in the output file. The Column content defines the feature attribute that shall be mapped to this column. You can also add an optional Comment that, however, is just meant for documentation and will not be exported to the output file. The table is automatically populated with the content of the template file or will be empty in case no template file was loaded.

During an export, the table export operation iterates over all city objects to be exported. For each city object, a new record is created in the output file that consists of the columns defined in the above table layout. The values of the attributes chosen as Column content are queried from the database and written to the corresponding fields of the record.

Note

For every city object, its object identifier stored in the GMLID column of the CITYOBJECT table will always be exported as first column of each record in the output file. The name of this first column is always “GMLID”. Thus, there is no need to define your own column for the GMLID value.

The columns are written to a record in the same order as they are listed in the table layout. Select an entry in the table and use the Up (▲) and Down (▼) buttons on the right of the table to change its position in the list. Click on Remove to delete the selected entry from the table layout.

Add a new column or edit an existing one

You can add a new column to the output file by pushing the Add button next to the table in Fig. 5.10. This will open the following dialog window.

_images/impexp_plugin_spshg_new_column_fig.PNG

Add a new column to the table layout for the output file.

Enter the column title, content and an optional comment into the corresponding input fields. Only the content is mandatory. If you leave the title field empty, a default title is generated as a combination of the table and column name in the database.

To pick an attribute from the database as content for the column, you can use the tree view called Available data from database on the left of the dialog. This view lists all predefined tables of the 3DCityDB schema as nodes of the tree. When clicking on one of the nodes, the columns available for the corresponding table are shown. Select the database column you want to export and click the > button in the middle of the dialog to accept it as content for the column in the output file. The content field is automatically populated with an expression that references the database column. The syntax of this expression is based on the balloon content language used by the visualization export operation and explained in more detail in Section 5.2.2.1. The expression is dynamically evaluated during export and the value stored in the referenced column for a given city object is written to the output file.

Note

The tree view will also list tables and columns of CityGML ADEs if a corresponding ADE extension has been registered with the 3D City Database and the Importer/Exporter.

If a database table contains multiple entries for the same city object, then the different values for the chosen column are exported as a comma-separated list to the output file. If you prefer a single value instead, you can use one of the aggregation functions MAX, MIN, AVG, COUNT, SUM, FIRST, or LAST. Simply click on the f(x)▼ button instead of > to copy a database column into the Column content field and pick the function you want from the choice list. Note that the expression now also contains the chosen aggregation function. Multi-line content is also supported for a column in the output file. Click on the EOL (end of line) button to add a line break to the column content. The line break is encoded as [EOL] in the expression.
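
Following the [FIRST] pattern used in the sample template above, an aggregated column expression might, as a sketch, look like the line below, which counts the generic attributes attached to each city object. The column title Count is freely chosen, and the exact encoding of the aggregation function is produced by the f(x)▼ button.

Count:CITYOBJECT_GENERICATTRIB/[COUNT]ATTRNAME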

You can also enter the column content manually into the corresponding input field. Instead of a dynamic expression, you can also provide a static value that shall be exported as-is for every record in the output file. For example, assume you want to have a field called CITY in the output file that has the value Munich for every record. Then simply enter Munich into the Column content field.

Note

The Column content field may contain a combination of dynamic expressions and static values, possibly separated by line breaks.

The comment input field is optional and can be left empty. When clicking on the Insert column button at the bottom of the dialog window, the newly defined column will be carried to the table of columns for the output file (see Fig. 5.10). When you select an entry from this table and push the Edit button, you can edit the title, content and comment for this column using the same dialog window as shown in Fig. 5.11.

Save template file

If you want to reuse a table layout for multiple exports, you can save it as template file using the Save button as shown in Fig. 5.10. Use the Load button to restore the template file when you use the table export operation the next time. You can also create template files manually without using the Table Export user interface. Please stick to the rules for template files as discussed in Section 5.2.2.2 in this case.

Note

The Spreadsheet Generator plugin is shipped with example template files that are available from the templates folder within the installation directory of the plugin.

Export filters

Similar to the CityGML/CityJSON export process, the table export operation offers thematic and spatial filters to restrict an export to a subset of the 3D city model content stored in the database [3]. The following filters are available and discussed in separate sections of this chapter:

  • Feature version filter
  • Attribute filter
  • SQL filter
  • Bounding box filter
  • Feature type filter

To enable a filter, simply select its checkbox. This will automatically make the filter dialog visible. Make sure to provide the mandatory input for the filter to work correctly. If more than one filter is enabled, the filters are combined in a logical AND operation, i.e. all filter criteria must be fulfilled for a city object to be exported. If no checkbox is enabled, no filters are applied and, thus, all features contained in the database will be exported.

Note

All export filters are only applied to top-level features but not to nested sub-features.

Starting the export process

Once all settings are complete, the table export is triggered with the Export button [4] at the bottom of the dialog (cf. Fig. 5.9). If a database connection has not been established manually beforehand, the currently selected entry on the Database tab is used to connect to the 3D City Database. The separate steps of the export process as well as all errors and warnings that might occur during the export are reported to the console window. The overall progress is shown in a separate status window. This status window also offers a Cancel button to abort the export process at any time.

Command-line interface

Synopsis

impexp export-table [-hV] [--ade-extensions=<folder>] [-c=<file>]
                    [-D=<char>] -l=<file> [--log-file=<file>]
                    [--log-level=<level>] -o=<file> [--pid-file=<file>]
                    [--plugins=<folder>] [--use-plugin=<plugin
                    [=true|false]>[,<plugin[=true|false]>...]]... [[[-t=<
                    [prefix:]name>[,<[prefix:]name>...]]...
                    [--namespace=<prefix=name>[,<prefix=name>...]]...]
                    [[-r=<version>] [-R=<timestamp[,timestamp]>]]
                    [-i=<id>[,<id>...] [-i=<id>[,<id>...]]...] [-b=<minx,
                    miny,maxx,maxy[,srid]>] [-s=<select>]]
                    [[-T=<database>] -H=<host> [-P=<port>] -d=<name>
                    [-S=<schema>] -u=<name> [-p[=<password>]]]
                    [@<filename>...]

Description

The export-table command exports attributes of the city objects stored in the 3D City Database in tabular form. It corresponds to the table export operation offered on the Table Export tab of the graphical user interface (see Section 5.2.2). The command provides a range of options to adapt the export process. In addition, you can also use the global options that are available for all commands of the Importer/Exporter command-line interface (see Section 4.9).

General options

-o, --output=<file>

Specify the output file to use for storing the exported attribute data. Use .csv as file extension to export the data as comma-separated values (CSV) file, which is also the default output format. Alternatively, you can export the data as Microsoft Excel (XLSX) file by choosing .xlsx as file extension.

-l, --template=<file>

Provide the template file to use for the export. The template file defines the layout and content for the output file. See Section 5.2.2.2 for more information.

-D, --delimiter=<char>

Delimiter to use for separating values in the output CSV file. By default, a comma , is used as delimiter. This option is ignored when exporting as XLSX file.

Query and filter options

The export-table command offers additional options to define both thematic and spatial filters that are used to restrict the export to a subset of the top-level city objects stored in the 3D City Database.

-t, --type-name=<[prefix:]name>[,<[prefix:]name>...]

Comma-separated list of one or more names of the top-level feature types to be exported. The type names are case sensitive and shall match one of the official CityGML feature type names. To avoid ambiguities, you can use an optional prefix for each name. The prefix must be associated with the official XML namespace of the feature type. You can either use the official CityGML namespace prefixes listed in Table 4.11, or use the --namespace option to declare your own prefixes.

--namespace=<prefix=name>[,<prefix=name>...]

Used to specify namespaces and their prefixes as comma-separated list of one or more prefix=name pairs. The prefixes can be used in other options such as --type-name.
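
As a sketch, the following call restricts the export to Building features and declares its own prefix for the CityGML 2.0 building namespace; the connection details and file names are placeholders taken from the examples at the end of this section.

$ impexp export-table -H localhost -d citydb_v4 -u citydb_user -p my_password \
                      -t b:Building --namespace b=http://www.opengis.net/citygml/building/2.0 \
                      -l my_template.txt -o my_attributes.csv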

-r, --feature-version=<version>

Specify the version of the top-level features to use for the export. Allowed values are latest, at, between, terminated, terminated_at and all. When choosing latest, only those features that have not been terminated in the database are exported, whereas all will export all features. You can also choose to export only features that were valid at a given timestamp using at or for a given time range using between. Likewise, terminated will return all terminated features whereas terminated_at will select features that were terminated at a given timestamp. In all cases, timestamps must be provided using the --feature-version-timestamp option. Further details about the feature version filter are available in Section 5.2.2.3.

-R, --feature-version-timestamp=<timestamp[,timestamp]>

One or two timestamps to be used with the --feature-version option. A timestamp can be given as a date in the form YYYY-MM-DD or as a date-time specified as YYYY-MM-DDThh:mm:ss[(+|-)hh:mm]. The date-time format supports an optional UTC offset. Use one timestamp with the at and terminated_at values and two timestamps separated by a comma with the between value of the --feature-version option.
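
For example, a sketch of a call exporting the feature versions that were valid on March 1, 2018 could look as follows; the connection details and file names are again placeholders.

$ impexp export-table -H localhost -d citydb_v4 -u citydb_user -p my_password \
                      -r at -R 2018-03-01 \
                      -l my_template.txt -o my_attributes.csv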

-i, --resource-id=<id>[,<id>...]

Comma-separated list of one or more identifiers. Only top-level features having a matching value for their identifier attribute will be exported.

-b, --bbox=<minx,miny,maxx,maxy[,srid]>

2D bounding box to use as spatial filter. The bounding box is given by four coordinates that define its lower left and upper right corner. By default, the coordinates are assumed to be in the same CRS that is used by the 3DCityDB instance. Alternatively, you can provide the database SRID of the CRS associated with the coordinates as fifth value (e.g. 4326 for WGS84). All values must be separated by commas. The bounding box is evaluated against the ENVELOPE column of the CITYOBJECT table.

-s, --sql-select=<select>

Provide an SQL SELECT statement to be used as SQL filter when querying the database. In general, any SELECT statement can be used as long as it returns a list of database IDs of the selected city objects (see Section 4.5.3 for more information). You can also use an @-file to provide the SELECT statement (see Section 4.9.9.3).

Database connection options

The following options allow you to define the connection details that shall be used for establishing a connection to the 3D City Database. You can also use environment variables for this purpose (see Section 4.9.8).

-T, --db-type=<database>

Specify the database system used for running the 3DCityDB. Allowed values are postgresql for PostgreSQL/PostGIS databases (default), and oracle for Oracle Spatial/Locator databases.

-H, --db-host=<host>

Specify the host name of the machine on which the 3DCityDB database server is running.

-P, --db-port=<port>

Specify the TCP port on which the 3DCityDB database server is listening for connections. The default value is 5432 for PostgreSQL and 1521 for Oracle.

-d, --db-name=<name>

Specify the name of the 3DCityDB database to connect to. When connecting to an Oracle database, provide the database SID or service name as value.

-S, --db-schema=<schema>

Name of the database schema to use when connecting to the 3DCityDB. If not provided, the citydb schema is used for PostgreSQL by default, whereas the schema of the user specified by the option --db-username is used under Oracle.

-u, --db-username=<name>

Connect to the 3DCityDB database server as the user given by name.

-p, --db-password[=<password>]

Specify the password to use when connecting to the 3DCityDB database server. You can either provide the password as value for this option or leave the value empty to be prompted to enter the password on the console before connecting to the database. If you skip this option completely, the impexp tool will try to connect to the database without a password. If the database server requires password authentication and a password is not available by other means, the connection attempt will fail in this case.

Examples

$ impexp export-table -H localhost -d citydb_v4 -u citydb_user -p my_password \
                      -l my_template.txt -o my_attributes.xlsx

Export attributes according to the provided my_template.txt file (see Section 5.2.2.2) for all top-level city objects stored in the database. The attribute data is stored in the my_attributes.xlsx file using XLSX as output format. The 3DCityDB to connect to is assumed to be running in a PostgreSQL database on the same machine. The connection will be established to the citydb_v4 database with the user citydb_user and the password my_password.

$ impexp export-table -H localhost -d citydb_v4 -u citydb_user -p my_password \
                      -t Building -b 13.3508824,52.4799281,13.3578297,52.4862805,4326 \
                      -D ";" -l my_template.txt -o my_attributes.csv

Only export attributes of Building features overlapping with the provided bounding box from the database. The coordinates of the bounding box are given in WGS84. For this reason, the fifth value 4326 of the -b option denotes the SRID that is used by the target database for the WGS84 reference system. The output format is CSV and a semicolon ; is used as delimiter.

$ impexp export-table -H localhost -d citydb_v4 -u citydb_user -p my_password \
                      -s "select cityobject_id from cityobject_genericattrib \
                          where attrname='energy_level' and realval < 12" \
                      -l my_template.txt -o my_attributes.csv

Export attributes of all city objects satisfying the given SQL SELECT statement.

Note

The Spreadsheet Generator plugin also supports exporting attributes of features defined in a CityGML ADE. For this purpose, a corresponding ADE extension must have been registered with the 3D City Database and the Importer/Exporter.

ADE Manager Plugin

Note

This is the documentation of the ADE Manager Plugin version 2.0.

The ADE Manager plugin allows you to dynamically extend a 3D City Database instance to be able to store and manage CityGML Application Domain Extension (ADE) data in the database. The plugin uses the open source Attributed Graph Grammar (AGG) transformation engine to automatically transform an XML Schema definition (XSD) of a given CityGML ADE to a compact relational database schema (including tables, indexes, constraints, etc.) that seamlessly integrates with the 3DCityDB schema.

In addition, an XML-based schema mapping file is generated by the plugin containing metadata about the derived database schema as well as the explicit mapping between the source and the target schema. Based on the relational schema for the CityGML ADE and the derived schema mapping file, developers can implement applications for managing and processing the ADE data stored in a 3DCityDB instance.

The ADE Manager plugin provides a graphical user interface (GUI) that is added as ADE Manager operation tab to the GUI of the Importer/Exporter. The following sections cover the installation and use of the plugin.

Installation

The ADE Manager plugin is packaged with the Importer/Exporter installer. When using the GUI-based setup wizard for installation (see Section 1.2.1), you can simply select the plugin from the list of available software packages as shown below.

_images/ade_manager_plugin_gui_installation.png

Installation wizard of the Importer/Exporter tool.

If you have not installed the plugin together with the Importer/Exporter, it is also possible to install it at any later time with the following steps:

  1. Download the ADE Manager plugin as ZIP file from the official 3D City Database website at https://www.3dcitydb.org or from the releases section of the GitHub repository used for maintaining the plugin at https://github.com/3dcitydb/plugin-ade-manager.

    Caution

    Make sure the version of the ADE Manager plugin that you want to download can be used together with the version of your Importer/Exporter installation.

  2. Open the plugins folder within the installation directory of your Importer/Exporter and unzip the ZIP file of the ADE Manager plugin there. If the plugins folder does not exist, then create it first. After unzipping, a new subfolder plugin-ade-manager should have been created containing all files required by the plugin (see the example commands after this list).

  3. Run the Importer/Exporter. The ADE Manager plugin should be automatically detected and loaded.
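
On a Unix-like system, step 2 might look like the following sketch; the installation path and the name of the downloaded ZIP file are placeholders and must be adapted to your environment.

$ cd /path/to/importer-exporter        # installation directory of the Importer/Exporter
$ mkdir -p plugins                     # create the plugins folder if it does not exist
$ unzip ~/Downloads/plugin-ade-manager.zip -d plugins/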

If you have successfully installed the plugin, the ADE Manager operation tab is available on the operations window of the Importer/Exporter as illustrated below.

_images/ade_manager_plugin_user_interface.png

The “ADE Manager” operation tab of the ADE Manager plugin.

Using the graphical user interface

When starting the Importer/Exporter, the user interface of the ADE Manager plugin is available as additional tab called ADE Manager on the operations window.

_images/ade_manager_plugin_gui.png

The user interface of the ADE Manager plugin.

The user interface is organized into three sections offering different functionalities and operations for working with CityGML ADEs.

Functionalities offered by the ADE Manager plugin:

  • ADE operations [1]: Management operations for ADE extensions that are already registered with the 3DCityDB.
  • ADE registration [2]: This dialog allows you to register an ADE extension package with the 3DCityDB.
  • ADE transformation [3]: With this dialog, you can automatically transform the XML Schema definition of a CityGML ADE to a relational database schema that seamlessly integrates with the 3DCityDB schema.

The separate operations are discussed in more detail in the following sections. Which operation to use depends on whether an ADE extension package already exists for the CityGML ADE you want to use.

If an ADE extension package exists, you simply have to register this ADE package [2] with your 3DCityDB instance. This process will create the required database tables, objects and functions for storing and managing ADE data in your 3DCityDB. In addition, you have to install the ADE extension package for the Importer/Exporter (see Section 5.3.6). Afterwards, you can import and export ADE datasets using the general operations of the Importer/Exporter. And you can execute ADE operations [1] like retrieving metadata about an ADE or entirely removing an ADE extension package.

The 3D City Database project offers three open source and free to use ADE extension packages:

  • Energy ADE: The Energy ADE extends CityGML by features and properties necessary to perform energy simulations and to store and exchange the corresponding results.
  • i-UR ADE: The i-UR ADE is an information infrastructure for urban revitalization and planning.
  • Test ADE: This is an artificial ADE and is only meant for testing and demonstrating the ADE support of the 3DCityDB.

If you do not have a ready-to-use ADE extension package for your CityGML ADE, the first step will rather be to transform the XML Schema [3] of the ADE to a relational schema representation that can be used with the 3DCityDB. This mapping can be done fully automatically with the ADE Manager plugin. The output of the transformation process is an ADE extension package that you can directly register [2] with your 3DCityDB. Afterwards, you can use the ADE operations [1] with your ADE extension if required.

Caution

The ADE Manager plugin only creates the database tables, objects and functions that are required for storing and managing ADE data in the 3DCityDB. You can already use this database schema to load and export ADE data with your own tools. However, if you want to use the Importer/Exporter for this purpose, you need an additional Java library that adds support for the CityGML ADE to the Importer/Exporter. This Java library is not automatically created by the ADE Manager plugin but must be developed manually.

ADE operations

The ADE Operations dialog offers the possibility to list all CityGML ADEs that have been successfully registered with a 3D City Database instance and to perform management operations for these ADE extensions.

_images/ade_manager_plugin_ade_operations.png

The ADE Operations dialog of the ADE Manager plugin.

Retrieving registered ADEs from the 3DCityDB

Simply click on the Fetch ADEs button below the ADE table to get the list of all ADEs registered with your 3DCityDB instance. If a database connection has not been established manually beforehand, the currently selected entry on the Database tab (see Section 4.3) is used to connect to the 3DCityDB. The ADE Manager plugin queries the ADE metadata tables of the 3DCityDB schema in order to retrieve the available and supported ADEs (see Section 3.2.16.2).

For example, assume that you have successfully registered the Test ADE. The Fetch ADEs operation would therefore give you the following result.

_images/ade_manager_plugin_ade_operations_result.png

Example result of the Fetch ADEs operation.

Each ADE is displayed as separate entry in the above ADE table with its Name, Description and Version. The DB Prefix column shows the name prefix that is used for every table in the 3DCityDB that belongs to this ADE. The Creation Date reflects the date and time when the ADE was added to the 3DCityDB. Whenever an ADE is registered, a unique identifier is automatically generated that is displayed in the ADE ID column. This identifier is used internally, for instance, to check whether an ADE extension for the Importer/Exporter matches an ADE schema registered in the database.
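
The same information can also be queried directly from the database, for instance with a statement like the following sketch. It assumes the ADE metadata table of the default citydb schema and column names matching the attributes listed above (cf. Section 3.2.16.2).

select
    name, description, version, db_prefix, creation_date
from
    citydb.ade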

When double-clicking on an entry in the ADE table, more metadata about the ADE will be displayed in a separate window as shown below. The Status field shows whether the ADE is fully supported or some user action is required. The information dialog is identical to the one that can be requested from the Database tab (see Section 4.3.5).

_images/ade_manager_plugin_ade_operations_metadata.png

ADE metadata dialog.

Removing a registered ADE

The ADE operations dialog also allows you to remove a previously registered ADE from the 3DCityDB. Simply fetch the list of available ADEs from the database, select the ADE you want to delete from this list and click the Remove ADE button. After a confirmation prompt, the ADE is removed and the operation progress is logged to the console window.

Caution

When removing an ADE, the entire ADE database schema together with all data stored in the ADE tables will be deleted from the database. There is no way to undo this operation, so use it with care.

Generate delete and envelope scripts

The 3DCityDB is shipped with database functions to delete city objects (see Section 3.5.8) and to calculate the envelope of city objects (see Section 3.5.9). The default versions of these scripts are implemented against the predefined tables of the 3DCityDB schema and, thus, do not consider ADE tables and features. For this reason, the scripts are automatically regenerated and installed in the database by the ADE Manager plugin whenever an ADE is registered with the 3DCityDB to ensure that they also work with the newly added ADE schema.

With the Generate delete script and Generate envelope script buttons you can manually trigger the process of generating the scripts. The operation will always consider all ADEs that are registered with the 3DCityDB; it is therefore independent of any ADE you may have selected in the ADE table. The separate steps of the process are logged to the console window. The resulting script is presented in a new window like the one shown below.

_images/ade_manager_plugin_show_install_scripts.png

Dialog window for showing and installing newly generated database scripts.

At the bottom of this window you can choose to save the script to a local file. This gives developers the possibility to modify the script and to install their modified version using a database tool such as pgAdmin or SQLDeveloper. Alternatively, you can let the ADE Manager plugin install the script into the 3DCityDB instance by clicking on the Install button. This can be useful, for instance, if you have installed a modified version previously and want to restore the original version.

ADE registration

The ADE registration operation of the ADE Manager plugin allows you to register a CityGML ADE with your 3D City Database. During the ADE registration process, new and ADE-specific database objects such as feature tables, functions, sequences, and indexes are added to the existing 3DCityDB database schema. Also, the 3DCityDB metadata tables (see Section 3.2.16.2) are populated with the information about the ADE.

_images/ade_manager_plugin_ade_registration.png

The ADE Registration dialog of the ADE Manager plugin.

ADE extension package

The registration operation requires an ADE extension package as input. An ADE extension package is a collection of database scripts together with a schema mapping file organized in a predefined folder structure. This folder structure is illustrated below.

_images/ade_manager_plugin_input_folder_structure.png

Structure of an ADE extension package required for registering an ADE.

The root folder of the package must contain the two mandatory subfolders 3dcitydb and schema-mapping. The first subfolder 3dcitydb again contains two subfolders called oracle and postgreSQL that contain database-specific SQL scripts for creating and dropping the ADE schema. These files must be named CREATE_ADE_DB.sql and DROP_ADE_DB.sql.

The CREATE_ADE_DB.sql script will be executed by the ADE Manager plugin for creating the 3DCityDB compliant ADE database schema according to the database type (PostgreSQL or Oracle) being used. The SQL file DROP_ADE_DB.sql must contain the SQL statements for removing the corresponding ADE database schema. These statements are imported into the ADE metadata table of the 3DCityDB schema (see Section 3.2.16.2) during the ADE registration process and hence are persistently stored in the database. When removing an ADE using the ADE Operations dialog (see Section 5.3.3), the statements will be read from the ADE table and executed by the ADE Manager plugin.

The second subfolder schema-mapping shall contain an XML file called schema-mapping.xml. This file defines the relevant metadata about the ADE extension (e.g., name, description, XML namespace, value range of object class IDs) as well as the explicit mapping of elements of the XML Schema definition of the CityGML ADE to tables and columns in the relational database schema. This schema mapping file is not only used in the ADE registration process, but also required by most operations of the Importer/Exporter, for instance, to automatically build SQL queries against the 3DCityDB and ADE tables when exporting data.

All files and folders in an ADE extension package must adhere to the structure and naming conventions explained above to be accepted by the ADE Manager plugin. The name of the root folder of the package can be chosen freely though.
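
In plain text, the expected package layout therefore looks as follows; the name of the root folder, here my-ade-package, is just a placeholder.

my-ade-package/
├── 3dcitydb/
│   ├── oracle/
│   │   ├── CREATE_ADE_DB.sql
│   │   └── DROP_ADE_DB.sql
│   └── postgreSQL/
│       ├── CREATE_ADE_DB.sql
│       └── DROP_ADE_DB.sql
└── schema-mapping/
    └── schema-mapping.xml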

Note

The ADE transformation operation of the ADE Manager plugin automatically creates a valid ADE extension package with all required files from the XML Schema definition of a CityGML ADE. More information is provided in Section 5.3.5.

Note

A schema mapping file is also used for the predefined database schema of the 3DCityDB. It can be found here together with a corresponding XML schema definition. The impexp-core Java library of the Importer/Exporter provides an API for parsing, creating and writing valid schema mapping files (see here).

Starting the registration process

Provide the path to the root folder of the ADE extension package in the corresponding input field of the user dialog (see Fig. 5.19). You can either enter the folder manually or open a file selection dialog via the Browse button. Afterwards, click the Register ADE button to start the process. If a database connection has not been established manually beforehand, the currently selected entry on the Database tab is used to connect to the 3D City Database. The separate steps of the registration process as well as all errors and warnings that might occur during the registration are reported to the console window.

Note

The registration process cannot be aborted by the user as this might lead to an inconsistent database state. However, you can remove a registered ADE at any time using the ADE Operations dialog (see Section 5.3.3).

Caution

The registration operation only creates the database tables, objects and functions that are required for storing and managing ADE data in the 3DCityDB. You can already use this database schema to load and export ADE data with your own tools. However, if you want to use the Importer/Exporter for this purpose, you need an additional Java library that adds support for the CityGML ADE to the Importer/Exporter. This Java library is not automatically created by the ADE Manager plugin but must be developed manually.

Example

If you want to test the ADE registration process, you can use the open source Test ADE for this purpose. The TestADE is an artificial CityGML ADE for testing and demonstrating the ADE support of the 3D City Database. Download the Test ADE extension as ZIP file from the releases section of the GitHub repository at https://github.com/3dcitydb/extension-test-ade. Make sure you download a version that can be used together with the version of your Importer/Exporter.

The ZIP file contains the ADE extension package of the Test ADE. Unzip the file to a folder of your choice in your local file system and make sure that the folder has the required content as discussed above. Afterwards, simply provide this folder as input for the ADE extension package field in the user dialog and click the Register ADE button.

_images/ade_manager_plugin_gui_ade_registration.png

Registering the Test ADE extension package.

During the ADE registration process, the database schema of the Test ADE will be created and the metadata about the ADE will be written to the 3DCityDB metadata tables. In addition, the 3DCityDB database functions for deleting city objects (see Section 3.5.8) and calculating the envelope of city objects (see Section 3.5.9) will be automatically regenerated by the ADE Manager plugin to account for the new ADE tables and features.

After the Test ADE has been successfully registered, the list of all ADEs registered in the 3DCityDB instance is updated and displayed in the ADE table of the ADE Operations dialog (see Section 5.3.3). Make sure the Test ADE is listed here.

_images/ade_manager_plugin_ade_operations_result.png

The Test ADE is listed as registered ADE after the registration operation.

You may also use a database tool like pgAdmin (PostgreSQL) or SQLDeveloper (Oracle) to check whether the ADE database schema has been correctly created. For the Test ADE, the 3DCityDB schema should now contain additional tables starting with the prefix “test_” that are used to store Test ADE data. In addition, there should be new database functions to delete Test ADE features (with prefix “del_test_”) and to calculate their envelope (with prefix “env_test_”).
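
On PostgreSQL, a quick way to verify this is a query against the information schema like the following sketch, which assumes the default citydb schema.

select
    table_name
from
    information_schema.tables
where
    table_schema = 'citydb'
    and table_name like 'test\_%'
order by
    table_name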

_images/ade_manager_plugin_tables_pgadmin.png

The Test ADE tables starting with the prefix “test_” shown in pgAdmin.

ADE transformation

Adding tagged values to the XML Schema

One typical issue in the transformation of the XML Schema of a CityGML ADE is that relevant information about the ADE data model is often missing in the XML Schema and, thus, cannot be considered for the derivation of the database schema.

For example, if you use UML as data modelling tool for your ADE, the information whether an association is modelled as composition or aggregation in UML cannot be expressed in XML Schema and is therefore typically lost when deriving the XML Schema from the UML using standard UML-to-GML tools like ShapeChange. The type of association is, however, important for creating the database schema (e.g., for deciding where to put foreign keys, whether to use n:m relationship tables, etc.) and for deriving delete scripts that not only delete a single ADE feature but also all of its nested subfeatures stored in different tables. Some information like whether or not an ADE feature is a top-level feature cannot be represented in UML or XML Schema at all.

To address these issues, you can annotate the XML Schema of the CityGML ADE with the missing information. For this purpose, the ADE Manager plugin defines and supports a set of so-called tagged values that can be added to the elements of the XML Schema as application-specific metadata using <xs:annotation> tags. A tagged value is a name-value pair that is encoded as <taggedValue tag="name">value</taggedValue> in XML. If you use UML for modelling your ADE, you can even add the tagged values directly to the UML constructs and they will be automatically carried to <xs:annotation> elements in the XML Schema when using tools like ShapeChange.

The following tagged values are supported by the ADE transformation operation.

Tagging top-level ADE features
Tagged value
topLevel (true | false)
Description
This tagged value allows for defining whether an ADE feature is a top-level feature.
Example use in XML-Schema
<element name="IndustrialBuilding"
  substitutionGroup="bldg:_AbstractBuilding"
  type="TestADE:IndustrialBuildingType">
  <annotation>
    <appinfo>
      <taggedValue tag="topLevel">true</taggedValue>
    </appinfo>
  </annotation>
</element>
Tagging the multiplicity of ADE hook properties
Tagged value
minOccurs and maxOccurs (integer value | “unbounded”)
Description
The combination of the two tagged values allows for defining the multiplicity of each ADE hook property. In the UML model, this multiplicity can be explicitly specified, but it is lost in the XML Schema, because every ADE hook property is hard-coded with a multiplicity of [0..*] in the XML Schema. Since the ShapeChange tool up to version 2.5.1 is not able to read the multiplicity of the hook properties from the UML model directly, the two tagged values are required, although they just replicate the information from the UML.
Example use in XML-Schema
<element name="ownerName"
  substitutionGroup="bldg:_GenericApplicationPropertyOfAbstractBuilding"
  type="string">
  <annotation>
    <appinfo>
      <taggedValue tag="maxOccurs">1</taggedValue>
    </appinfo>
  </annotation>
</element>
Tagging the relationship type between classes
Tagged value
relationType (association | aggregation | composition)
Description
An enumeration value that allows distinguishing the three types of relationship between two associated classes. This meta-information is also lost in the mapping from UML to XML Schema, because XML Schema does not distinguish between the three relation types. This tagged value is also redundant from the view of UML, but required when using ShapeChange.
Example use in XML-Schema
<element maxOccurs="unbounded" minOccurs="0" name="boundedBy"
  type="bldg:BoundarySurfacePropertyType">
  <annotation>
    <appinfo>
      <taggedValue tag="relationType">composition</taggedValue>
    </appinfo>
  </annotation>
</element>
Tagging the LoD of geometry properties
Tagged value
lod (integer value between 0 and 4)
Description
An integer value denoting the LoD representation of the respective geometry property. If this tagged value is not provided, the ADE Manager will check whether the property name is prefixed with ‘lod’ (not case-sensitive) and whether the fourth character is an integer between 0 and 4. If this is the case, this integer value will be used as the LoD information.
Example use in XML-Schema
<complexType abstract="true" name="_AbstractBuildingUnitType">
  <complexContent>
    <extension base="core:AbstractCityObjectType">
      <sequence>
        <element name="footprint" type="gml:MultiSurfacePropertyType">
          <annotation>
            <appinfo>
              <taggedValue tag="lod">0</taggedValue>
            </appinfo>
          </annotation>
        </element>
      </sequence>
    </extension>
  </complexContent>
</complexType>
Tagging property elements to be ignored
Tagged value
ignore (true | false)
Description
This tagged value allows for labeling selected properties that shall not be taken into account when deriving the ADE database schema and schema mapping file.
Example use in XML-Schema
<complexType abstract="true" name="_AbstractBuildingUnitType">
  <complexContent>
    <extension base="core:AbstractCityObjectType">
      <sequence>
        <element name="legacyAttr" type="string">
          <annotation>
            <appinfo>
              <taggedValue tag="ignore">true</taggedValue>
            </appinfo>
          </annotation>
        </element>
      </sequence>
    </extension>
  </complexContent>
</complexType>
Graph transformation rules

The realization of the model transformation process is mainly based on the concept of “graph transformation” and implemented using the open source graph transformation engine Attributed Graph Grammar (AGG). The AGG transformation tool comes with a graphical editor that allows users to define an arbitrary number of graph-structured transformation rules for mapping complex object-oriented models onto compact relational database models. The graph transformation process implemented for the ADE Manager plugin as well as the most relevant transformation rules are discussed in detail in [YaKo2017].

While developing the ADE Manager plugin, around 50 default mapping rules have been defined and tested. Using these predefined mapping rules, well-known CityGML ADEs like the Energy ADE, i-UR ADE, Noise ADE, UtilityNetwork ADE, Dynamizer ADE, IMGeo3D ADE and further custom ADEs could be successfully and correctly transformed to compact relational schemas for the 3DCityDB. Thus, typically, there is no need for users to change or customize the default rules used by the transformation operation.

If you nevertheless want to modify these default rules or even add your own additional rules, you have to build a customized version of the ADE Manager plugin. For this purpose, clone the source code of the ADE Manager plugin from its GitHub repository at https://github.com/3dcitydb/plugin-ade-manager to a folder in your local file system.

The graphical editor of the AGG tool can be started with the runnable JAR file AggV21Build.jar that can be found in the lib subfolder of the cloned repository. On most systems, double-clicking this JAR file will launch the AGG editor. If this does not work for you, you can execute the AGG editor from the command line with the following command.

$ java -jar AggV21Build.jar

Once the AGG tool has started, use File -> Open from the main menu bar of the user interface to load the default transformation rules that are used by the ADE Manager plugin. The AGG workspace file you have to load is called Working_Graph.ggx and is located in the subfolder src/main/resources/org/citydb/plugins/ade_manager/graph. Modify this file and the contained rules according to your needs.

_images/ade_manager_plugin_AGG_user_interface.png

AGG graph editor for defining model transformation rules for the ADE Manager plugin.

When you have completed the work with the AGG editor, you have to compile your customized version of the ADE Manager plugin. The plugin uses Gradle as build system. To build the plugin from source, open a terminal on your local machine and change to the folder where you have cloned the repository of the plugin. Afterwards, run the following command in this folder (on Windows, invoke gradlew installDist instead).

$ ./gradlew installDist

The script automatically downloads all required dependencies for building the ADE Manager plugin. So make sure you are connected to the internet. The build process runs on all major operating systems and only requires a Java 8 JDK or higher to run. The build process will produce the plugin software package under build/install. Simply copy the contents of this folder into the plugins folder of your Importer/Exporter installation to use the plugin.

The ADE transformation operation of the ADE Manager plugin allows you to transform the XML Schema definition of a CityGML ADE to a relational database schema that seamlessly integrates with the 3DCityDB schema. The output of the transformation operation is an ADE extension package that can be directly registered with the 3DCityDB using the ADE registration operation of this plugin (see Section 5.3.4).

_images/ade_manager_plugin_schema_transform_GUI.png

The ADE Transformation dialog of the ADE Manager plugin

Input file selection

The XML Schema file to be transformed must be provided at the top of the ADE transformation dialog [1]. Enter the path to your file manually or open a file selection dialog by clicking on the Browse button. Afterwards, push the Read XML Schema button. The transformation process will parse the schema file together with all referenced schema files (e.g. through <xs:import> tags). As a result, all XML namespaces except the default CityGML and GML namespaces found in the XML Schema file will be listed in the XML Namespace overview [2] on the left of the dialog.

Namespace selection and ADE metadata

Select the XML namespaces from the list in [2] whose elements shall be considered in the transformation process. Typically, a CityGML ADE only defines a single XML namespace. But multiple namespaces are possible and allowed, so you can also select multiple namespaces from the list. All XML elements associated with a namespace that has not been selected will be ignored in the transformation.

As a next step, provide additional metadata about the ADE in the input fields on the right side of the dialog [3]. The metadata comprises the name, a short description and the version number of the ADE. Moreover, you can define the name prefix that shall be used for all tables in the resulting ADE database schema. The initial object class ID must be set to a value greater than or equal to 10,000. The values 0 to 9999 are reserved for the 3DCityDB schema. You must make sure that the object class ID values used by different ADEs do not overlap. More details about the meaning of the individual metadata attributes can be found in Section 3.2.16.2.

Adapting the transformation process

The ADE Manager plugin offers two ways to customize the transformation process and result. The first way is to add tagged values to the XML Schema file. The tagged values provide additional information which is typically missing in the XML Schema but required for deriving the relational database schema. Second, you can even create your own rules for the graph transformation engine that is used in the background for the transformation process. Both options are discussed in separate sections of this chapter:

Starting the transformation process

As final step, select the output folder where the transformation result should be stored [4]. Once all transformation settings are correct, the Transform button [5] starts the transformation process (cf. Fig. 5.25). The separate steps of the transformation process as well as all errors and warnings that might occur during the transformation are reported to the console window. This process cannot be aborted by the user.

Note

The transformation operation will create a fully valid ADE extension package inside the output folder, which can directly be used to register the ADE with the 3DCityDB in a subsequent step. See Section 5.3.4 for more details.

Example

If you want to test the ADE transformation process, you can use the open source Test ADE for this purpose. The TestADE is an artificial CityGML ADE for testing and demonstrating the ADE support of the 3D City Database. The GitHub repository at https://github.com/3dcitydb/extension-test-ade contains the UML data model and the XML Schema file of the Test ADE (besides more content). Simply clone this repository and find the XML Schema file in the resources/schema folder. Alternatively, you can download the schema file from here.

Simply provide this schema file as input to the transformation operation in [1]. Choose the XML namespace “http://www.citygml.org/ade/TestADE/1.0” for the transformation from [2] and enter your metadata in [3]. The transformation process should only take a few seconds. Afterwards, register the Test ADE with the 3DCityDB using the transformation result as input.

Using an ADE with the Importer/Exporter

As mentioned in the previous sections, the ADE Manager plugin automatically transforms the XML Schema definition of a CityGML ADE to a compact relational database schema that seamlessly integrates with the 3DCityDB schema. And you can use the plugin to automatically deploy the created ADE schema in your 3DCityDB and use it for storing and managing ADE datasets with your own tools.

However, the Importer/Exporter tool cannot automatically process CityGML ADE data in a generic way. It offers a simple Java ADE API, though, that allows developers to implement a so-called ADE extension as a Java library that will be automatically loaded by the Importer/Exporter when starting the tool. If the Importer/Exporter comes across ADE data in import or export operations, it will check whether an extension is available for this ADE. If so, the work will be delegated to this extension through corresponding method calls of the ADE API. Otherwise, the ADE data will be ignored. The ADE API of the Importer/Exporter therefore gives developers full control over how ADE data should be managed in the ADE database schema created with the ADE Manager plugin.

Note

The ADE Manager plugin does not automatically create an ADE extension library for the Importer/Exporter. You have to manually develop this library for the ADE you want to use or download an existing ADE extension package if available. The open source Test ADE serves as example for how to implement an ADE extension against the ADE API of the Importer/Exporter. The TestADE is an artificial CityGML ADE for testing and demonstrating the ADE support of the 3D City Database.

The following brief guide explains the required steps for using the Test ADE with the Importer/Exporter.

  1. Download the latest version of the Test ADE as ZIP package from the releases section of the GitHub repository at https://github.com/3dcitydb/extension-test-ade. This ZIP package contains the Java extension library for the Importer/Exporter as well as the ADE database schema and schema mapping file for the 3DCityDB.

  2. Open the ade-extensions folder within the installation directory of your Importer/Exporter and unzip the ZIP file of the Test ADE there. If the ade-extensions folder does not exist, then create it first. After unzipping, a new subfolder extension-test-ade-{version} should have been created. Of course, you can freely choose another name for the folder.

  3. Check that the unzipped folder contains all files required by the Test ADE extension, namely the three subfolders 3dcitydb, schema-mapping and lib. Whereas the 3dcitydb and schema-mapping folders contain the files to register the ADE with the 3DCityDB (see Section 5.3.4), the lib folder contains the JAR files that implement the ADE API of the Importer/Exporter. The folder structure should look as shown below.

    _images/ade_manager_plugin_impexp_folder_structure.png
  4. Launch the Importer/Exporter. The JAR files in the lib folder along with the schema mapping file in the schema-mapping folder will be automatically loaded by the Importer/Exporter.

  5. Create a new 3DCityDB instance to be used in the following steps. The database should be set up with the SRID 31468 as coordinate reference system. If you need assistance in setting up a 3DCityDB, check the guide provided in Section 1.3. The test database is called TestADE in the following.

  6. Connect to your new and empty 3DCityDB instance with the Importer/Exporter. Go to the Database tab and check that the Test ADE extension has been successfully loaded. The result should look similar to the screenshot below. The check sign in the Importer/Exporter column of the ADE table indicates that the Importer/Exporter now supports the Test ADE, while the cross sign in the Database column means that the Test ADE has not been registered with the 3DCityDB yet.

    _images/ade_manager_plugin_impexp_support_status_no.png
  7. Go to the ADE Manager tab and register the Test ADE using the ADE registration operation described in Section 5.3.4. Use the extension-test-ade-{version} folder of step 2 as input for the registration operation.

  8. Reconnect to the TestADE database. The ADE table on the Database tab should now show check signs for both the database and the Importer/Exporter as illustrated below.

    _images/ade_manager_plugin_impexp_support_status_yes.png
  9. Now test the Importer/Exporter ADE support by importing Test ADE datasets. You can download two free datasets at https://github.com/3dcitydb/extension-test-ade/tree/master/resources/datasets. Go to the Import tab of the Importer/Exporter to load both datasets into the TestADE test database. You can optionally use the feature type filter as shown below to only import top-level features defined by the Test ADE.

    _images/ade_manager_plugin_citygml_import_filter.png

    A summary of the import process is printed to the console window. The log messages should look similar to the following figure.

    _images/ade_manager_plugin_citygml_import_summary.png
  10. Go back to the Database tab and create a Database report for the TestADE database (see Section 4.3.1). Again, you should get similar numbers for the 3DCityDB and ADE tables as shown below.

    _images/ade_manager_plugin_database_report.png
  11. Finally, export your Test ADE data again. For this purpose, go to the Export tab of the Importer/Exporter and specify an output CityGML file. Again, you can use a feature type filter to restrict the export to top-level features defined by the Test ADE. Click the Export button to start the export operation. A summary of the export process is printed to the console window and should look similar to the following figure.

    _images/ade_manager_plugin_citygml_export_summary.png

3D Web Map Client

Note

This is the documentation of the 3D Web Map Client version 1.9.

Starting from version 3.3.0, the 3DCityDB software package includes the 3DCityDB-Web-Map-Client (in this chapter simply called “3D web client”), which acts as a web-based front-end for high-performance 3D visualization and interactive exploration of arbitrarily large semantic 3D city models. The 3D web client has been developed based on the Cesium Virtual Globe, an open source JavaScript library developed by Analytical Graphics, Inc. (AGI). It utilizes HTML5 and the Web Graphics Library (WebGL) as its core for hardware acceleration and provides cross-platform functionality for displaying 3D graphics content on the web without the need for additional plugins.

While developing the 3D web client, various extensions have been made to the Cesium Virtual Globe to help users view and explore 3D city models conveniently. The major extension is that KML/glTF models exported using the Import/Export tool can now be directly visualized along with imagery and terrain layers within a web browser. In addition, the 3D web client can link the KML/glTF models with table data exported using the Spreadsheet Generator Plugin (SPSHG) and allows querying the thematic data of every city object. With this 3D web client, the functionality of the 3DCityDB now ranges from highly efficient storage and management of virtual 3D city models according to the CityGML standard to high-performance visualization and exploration of these models on the web.

System requirements

Since the 3D web client utilizes the WebGL-based Cesium Virtual Globe as its 3D geo-visualization engine, the hardware on which the 3D web client will be run must have a graphics card installed that supports WebGL. In addition, the web browser in use must also provide appropriate WebGL support. You can visit the following website to check whether your web browser supports WebGL or not:

http://get.webgl.org/

The 3DCityDB-Web-Map-Client has been successfully tested on (but is not limited to) the following web browsers under different desktop operating systems like Microsoft Windows, Linux, Apple Mac OS X, and even on mobile operating systems like Android and iOS.

  • Apple Safari
  • Mozilla Firefox
  • Google Chrome
  • Opera

For best viewing and interaction performance, it is recommended to use Google Chrome.

Using the 3D Web Client from the 3DCityDB homepage

If you want to try the 3DCityDB-Web-Map-Client or cannot install it on your own web server, you can use the pre-installed version from the 3DCityDB homepage under the URL

https://www.3dcitydb.org/3dcitydb-web-map/1.9.0/3dwebclient/index.html

This is a stable link and can be used for long-lived demo links. If new versions are released in the future, the old versions will remain functional on the server and the new versions will be installed in new subfolders (i.e. next to the folder 1.9.0).

Installation and configuration

For convenient use, there is an official web link (see below) that can be opened to directly run the 3D web client in your web browser.

https://www.3dcitydb.org/3dcitydb-web-map/1.9.0/3dwebclient/index.html

Note

The number 1.9.0 in the URL denotes the version number of the 3D web client. Once the 3D web client is upgraded in the future, this version number will be adapted to match the current release of the 3D web client. Web links pointing to previous software versions will remain valid and accessible online.

The 3D web client is a static web application purely written in HTML and JavaScript and can therefore be easily deployed by uploading its files to a simple web server. A zip file for the 3D web client can be found in the installation directory of the Import/Export tool within the subfolder 3d-web-map-client or downloaded via the following GitHub link:

https://github.com/3dcitydb/3dcitydb-web-map/releases

The extracted contents of the zip file should look something like the screenshot below.

_images/webmap_content_files_fig.png

The 3D web client comes with a lightweight JavaScript-based HTTP server (the file named “server”) that is mainly meant for testing the functionality of the 3D web client on your local machine. For running this web server, the open source JavaScript runtime environment Node.js must be installed on your machine. The latest version of Node.js can be downloaded via the web link below:

https://nodejs.org/en/

Once the Node.js program has been installed, open a shell on your operating system, navigate to the folder where the server.js file is located, and simply run the following command to launch the server:

node server.js
_images/webmap_cli_running_web_server_fig.png

Example of running the JavaScript-based web server

Now, the 3D web client is available via the URL below and its user interface should look like in the following figure:

http://localhost:8000/3dwebclient/index.html
_images/webmap_user_interface_fig.png

User interface of the 3D web client

Feature overview

The 3D web client has been developed by extending and customizing the so-called Cesium Viewer, a composite widget shipped with Cesium that provides the overall functionality of a 3D globe such as camera control, rendering of geometries and materials, animation etc. In addition, the Cesium Viewer contains a number of especially attractive widgets and plugins providing functionality like querying a geocoding service, switching between different viewing modes (2D, 2.5D, and 3D view), and handling imagery and terrain layers, which are commonly useful for a variety of GIS applications. Moreover, starting from version 1.6.0, the web client provides better support for mobile devices, such as a more compact GUI layout and the ability to interact with the web map in first-person view based on the user’s location in real time. All these functionalities along with the enhanced features developed for the 3D web client are explained in more detail below.

_images/3d_web_client_gui.png

Relevant GUI components of the 3D web client

The 3D Globe [1] is a base Cesium widget that allows the user to navigate through the Earth map by panning, moving, tilting, and rotating the camera perspective using a mouse or touchscreen. In addition, the camera perspective can also be controlled by means of the Navigation Component [2] which is an open source Cesium plugin and offers the same navigation possibilities that can be achieved with mouse or touchscreen. It consists of a group of widgets, namely a Navigator widget for controlling the camera perspective, a North Arrows widget for orienting the Earth map towards the north, and a Scale Bar for estimating the distance between two points on the ground.

The Cesium Viewer provides an especially useful built-in Toolkit [3] containing widgets like Geocoder, HomeButton, GeolocationButton, BaseLayerPicker, and NavigationHelpButton. The view panel of the Geocoder can be expanded by clicking on the button loupe_icon to display an input field into which the user can enter either an explicit position value in the form of “[longitude], [latitude]” or an address to search for a particular location. After pressing the “Enter” key on the keyboard or clicking on the button loupe_icon, the geocoding process will be performed using the Bing Maps Locations API according to the entered location information. Once the target location has been found, the Earth map will be automatically adjusted to the returned location and zoomed to the bounding box that best fits the camera perspective. For example, if you want to search for the position (longitude = 11.56786, latitude = 48.14900) of the Technical University of Munich, the input field of the Geocoder can be filled with the text value “11.56786, 48.14900” and the result should look like the following figure.

_images/webmap_geocoder_fig.png

Searching the main building of the Technical University of Munich by using the Geocoder widget

The HomeButton home_icon helps the user to quickly reset the camera perspective to the default status (cf. Fig. 6.3). In addition, the GeolocationButton geolocation_icon provides geolocation-based features such as flying to the user’s current location on the 3D map and displaying the first-person view in real time on mobile devices, which is explained in more detail in Section 6.8.

In most GIS applications, the term base layer (or basemap) refers to a background layer on the map using, for example, satellite imagery and terrain models, to help people quickly identify locations and orientations from a certain camera perspective. By default, Cesium comes with a number of selectable imagery layers provided by different mapping services, such as Bing Maps, OpenStreetMap, ESRI Maps etc. In addition, a terrain layer called STK World Terrain is available for showing worldwide 3D elevation data with an average grid resolution of 30 meters.

Note

Due to changes in the Cesium Terms of Service as well as the introduction of the new commercial Cesium ion platform starting from September 1st, 2018, the STK World Terrain layer has been replaced by the Cesium World Terrain hosted by Cesium ion (https://cesium.com/content/cesium-world-terrain).

All these base layers (imagery and terrain layers) can be controlled by the BaseLayerPicker widget (cf. the following figure) which has a view panel for listing all the available base layers represented by their names and respective icons and allows the user to select the desired one. For example, when an icon representing the OpenStreetMap is selected, a new instance of the OpenStreetMap imagery layer will be created to replace the imagery layer that is currently in use. Similarly, the terrain layer can be independently selected and added to the Earth map to overlap with the selected imagery layer.

_images/webmap_cesium_baselayerpicker_fig.png

Base layers available by default, listed in the BaseLayerPicker widget

The last widget contained within the Cesium Toolkit [3] (cf. Fig. 6.4) is the so-called NavigationHelpButton for showing brief instructions on how to navigate the Earth map with mouse (typically for desktop and laptop PCs) and touchscreen (typically for smart phones and tablet PCs). By clicking on the question_mark_icon button, the corresponding view panel (cf. the following figure) will be shown on the upper-right corner of the 3D web client.

_images/webmap_navigation_help_fig.png

The NavigationHelpButton widget showing the instructions for navigating Earth map

The next widget is the so-called CreditContainer [4] (cf. Fig. 6.4), which displays a collection of credits with respect to the software and data providers that have been involved in the development and use of the 3D web client. These credits mainly include the mapping services (depending on the selected base layer, e.g. Bing Maps), the 3D geo-visualization engine (Cesium Virtual Globe), and the development provider of the 3D web client (3DCityDB), which are all represented with their icons, descriptions, and hyperlinks referencing their respective homepages.

The majority of the functionalities specifically provided by the 3D web client are controlled by the Toolbox widget [5] (cf. Fig. 6.4), an extended module based on the Cesium Viewer. On the one hand, it integrates and controls user-provided data in different formats, namely KML/glTF models, thematic data (online spreadsheets), Web Map Service (WMS) data, and digital terrain models (DTM). On the other hand, this Toolbox widget also aids user interaction with 3D city models and allows, for example, deselecting, shadowing, hiding and showing 3D objects, as well as exploring them from different view perspectives using third-party mapping services like Microsoft Bing Maps with oblique view, Google Streetview, and a combined version (DualMaps).

Note

Starting from September 2018, a Cesium ion API key or a Bing Maps API key is required in order to provide access to the Cesium World Terrain as well as the Bing Maps services. These can be given as the parameters ionToken=<your_ion_token> and bingToken=<your_bing_token> in the client’s URL. If no valid token is present, OpenStreetMap will be selected as the default imagery and Nominatim will be activated as the default geocoder. For more information, please refer to the documentation of Cesium ion and Bing Maps.
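
For example, assuming the client is served locally as described above, the tokens could be appended to the URL as query parameters (the token values are placeholders):

http://localhost:8000/3dwebclient/index.html?ionToken=<your_ion_token>&bingToken=<your_bing_token>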

The visualization of 3D city models with large data sizes often results in significant performance issues in most 3D web applications. In order to overcome this issue, a tiling strategy has been implemented within the 3D web client to support efficient displaying of large pre-styled 3D visualization models in the form of tiled datasets exported from the 3DCityDB by using the KML/COLLADA/glTF Exporter. This tiling strategy utilizes the multi-threading capabilities of HTML5, so that time-costly operations such as parsing of multiple 3D objects can be delegated to a background thread running in parallel. At the same time, for each data layer, another thread monitors the interactions with the virtual camera and takes care of determining which data tiles should be loaded and unloaded according to their current visibility and display size on the screen. Moreover, this tiling strategy supports a caching mechanism allowing data tiles loaded from an earlier computation to be temporarily stored in a cache, from which they can be loaded and rendered much faster than by reloading them from the remote server. Of course, a larger number of cached data tiles will consume more memory and may cause a memory overflow of the web browser. In order to avoid this, the 3D web client provides a so-called Status Indicator widget [6] (cf. Fig. 6.4) which displays the real-time number of displayed and cached data tiles and can be used to help the user conveniently monitor and control the memory consumed by the 3D web client.

While streaming the tiled 3D visualization models, each data tile requires at least an asynchronous HTTP (Hypertext Transfer Protocol) request (AJAX) to fetch the corresponding KML/glTF files from the remote data server. This server must support CORS (Cross-Origin Resource Sharing) to get around the cross-domain restrictions.

Note

Alternatively, the open specification Cesium 3D Tiles can also be employed to stream massive heterogeneous 3D geospatial datasets. This is supported in 3DCityDB Web Map Client version 1.6.0 or later.

Enriching KML/glTF models with thematic data

As mentioned before, the 3D web client extends the Cesium Virtual Globe to support efficient displaying, caching, dynamic loading and unloading of large pre-styled 3D visualization models in the form of tiled KML/glTF datasets exported from the 3DCityDB using the KML/COLLADA/glTF Exporter. However, there is a major problem regarding the graphical visualization of semantic 3D city models: their attribute information is completely or partly lost in the 3D graphics formats. This issue has been addressed within the 3D web client by supporting the explicit linking of the 3D visualization models with their thematic data, which can be achieved using (1) a Google Spreadsheet stored in the cloud, or (2) PostgREST, a RESTful API for PostgreSQL.

Storing thematic data in Google Spreadsheets

The thematic data stored in the 3DCityDB can be exported to a single table (as a CSV .csv or an MS Excel .xlsx file) using the Spreadsheet Generator Plugin (SPSHG) explained in Section 5.2. This table can then be uploaded to Google Drive as a Google Spreadsheet.

This strategy therefore offers possibilities for collaborative and interactive data exploration of semantic 3D city models by means of querying the thematic data of the selected city object. The corresponding system architecture is illustrated in the following figure.

_images/3d_web_map_overview.png

Coupling an online spreadsheet with a 3D visualization model (i.e. a KML/glTF visualization model) in the cloud [HeNK2012]

_images/webmap_example_online_spreadsheet_fig.png

Example of an online spreadsheet

Publishing thematic data using RESTful API

Alternatively, the thematic data can be published directly from the 3DCityDB using a RESTful API that is supported by the employed database (such as PostgREST for PostgreSQL).

The RESTful API allows publishing database tables or SQL views on the internet. This approach therefore provides developers full control over: (1) where the resources are being hosted (i.e. independent from third-party cloud service providers), (2) which resources should be published, and (3) who can have access to which resources.

However, this approach requires a good understanding of the 3DCityDB schemata (see Section 3.2) as well as familiarity with SQL in general (in order to define custom SQL views to publish directly from the database). For PostgREST, a very good tutorial is available.

Collaborative editing of the published thematic data is theoretically possible; however, it depends greatly on the implementation of the employed RESTful services on the database side.
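
To illustrate the idea, the following SQL sketch defines a simple view for the “vertical” table layout described in the next section. It is only an example and not part of the 3DCityDB tooling; it assumes the default citydb schema of a 3DCityDB version 4 instance on PostgreSQL and exposes the string-valued generic attributes of each city object.

-- Example view for a "vertical" thematic data table (gmlid, attribute, value).
-- Table and column names refer to the default 3DCityDB v4 "citydb" schema (cf. Section 3.2).
CREATE VIEW citydb.thematic_data AS
SELECT co.gmlid    AS gmlid,
       ga.attrname AS attribute,
       ga.strval   AS "value"
  FROM citydb.cityobject co
  JOIN citydb.cityobject_genericattrib ga ON ga.cityobject_id = co.id
 WHERE ga.strval IS NOT NULL;

The view can then be exposed by PostgREST like any other table, so that a URL following the pattern https://example.com:3000/thematic_data could be used as thematicDataUrl (cf. Section 6.4.4).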

Structure of the tables containing thematic data

Since Google Fusion Tables was shut down on Dec 3, 2019, the 3DCityDB Web Map Client, starting from version 1.9.0, is capable of fetching data published via the Google Sheets API v4 and from a PostgreSQL database with a RESTful API enabled (PostgREST). Data fetched from the Google Sheets API and PostgREST can be displayed in the infobox as thematic data when a city object is clicked. Please refer to Section 6.4.4 for a brief tutorial on how to import KML/glTF models with thematic data in the 3D web client.

In addition to the two newly supported data sources, it is now also possible to choose the table type between All object attributes in one row (horizontal) and One row per object attribute (vertical), where:

  • Horizontal: all object attributes are stored in columns of one single row, which means each ID occurs only once in the table. This is applicable if all objects have the same or similar attributes.

    Note

    The thematic data must be stored in the first sheet of the spreadsheet. The first column of this sheet must be called gmlid or GMLID.

    Example:

    gmlid attribute1 attribute2 attribute3 attribute4
    gmlid1 value1 value2 value3 value4
    gmlid2 value1 value2 value3 value4
  • Vertical: each object attribute is stored in one row consisting of three columns ID, Attribute and Value, which means an ID may occur in multiple rows in the table. This is used when the numbers of attributes or attribute names vary greatly between objects.

    Note

    A vertical table must contain exactly three columns in this exact order: gmlid, attribute and value.

    Example:

    gmlid attribute value
    gmlid1 attribute1 value1
    gmlid1 attribute2 value2
    gmlid1 attribute3 value3
    gmlid2 attribute1 value1
    gmlid2 attribute2 value2
    gmlid2 attribute3 value3
    gmlid2 attribute4 value4

For an overview of the responses from the Google Sheets API, please refer to the official documentation.

The response from PostgREST service is encoded in JSON with the following structure:

  • Both the horizontal and vertical mode consist of an array of records enclosed in [ ... ].

  • Each record represents a line in the table, where:

    • Each record in vertical mode only has exactly 3 elements: gmlid, attribute name and attribute value. The gmlids here can be duplicated in other records, but the combination of the first two columns must be unique.

      [
         { "gmlid" : "id1", "attribute" : "value_name", "value" : "value" },
         { "gmlid" : "id2", "attribute" : "value_name", "value" : "value" },
         ...
      ]
      
    • On the other hand, each record in the horizontal mode can have more than 2 elements, but the first one must always be gmlid and this must be unique for each record.
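
      For example, a horizontal response might look like the following sketch (the attribute names are purely illustrative):

      [
         { "gmlid" : "id1", "attribute1" : "value", "attribute2" : "value" },
         { "gmlid" : "id2", "attribute1" : "value", "attribute2" : "value" },
         ...
      ]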

Importing KML/glTF models with thematic data

In order to add a KML/glTF data layer along with its linked thematic data to the 3D web client, the corresponding parameters (some of which are optional) must be properly specified on the corresponding input panel (cf. Fig. 6.10), which can be expanded and collapsed by clicking on the Add / Configure Layer button in the top left corner of the screen.

Note

All default parameter values used in the 3D web client were chosen according to the standard settings (e.g., the standard predefined tile size is 125m x 125m) specified in the preference settings of the KML/COLLADA/glTF Exporter (cf. Section 4.6.7.1). A parameter name with the suffix (*) denotes that this parameter is mandatory; otherwise it is optional.

_images/3d_web_map_toolbox.png

The input panel for adding a new KML/glTF layer with thematic data in the 3DCityDB Web Map Client

The following information can/should be provided while importing KML/glTF models with thematic data:

Property Description
URL(*) The web link of the master JSON file (cf. Section 4.6) holding the relevant meta-information of the data layer to be imported.
Name(*) A proper layer name must be specified which will be listed at the top of the input panel (in the top left corner of the screen) once the KML/glTF data layer has been successfully loaded into the 3D web client.
Layer data type The type of models to be imported, currently supports: COLLADA/KML/glTF and Cesium 3D Tiles datasets.
Load via proxy (Only for KML datasets) (Only on 3DCityDB websites) Specify if the KML datasets should be loaded using the built-in proxy server hosted in the 3DCityDB server. This can be used for remote KML datasets hosted on servers that do not allow Cross-Origin Resource Sharing (CORS).
KML clamp to ground (Only for KML datasets) Specify if the KML models should be clamped to the ground on the globe. This is useful when the KML dataset does not have correct heights and thus may be hidden under the terrain.
glTF version (Only for glTF datasets) The version of the glTF models being imported. Currently supports: 2.0 (latest), 1.0 and 0.8.
thematicDataUrl The URL of the thematic data source. This could be a Google Spreadsheets e.g. with the following structure https://docs.google.com/spreadsheets/d/<spreadsheet_id> or a table/view published by PostgREST e.g. with the following structure https://example.com:3000/<table_name>.
> Thematic Data Source The thematic data source type, currently supports: Google Sheets API, PostgreSQL REST API and KML Documents as data source.
> Table type The type of tables containing thematic data, currently supports: All object attributes in one row (horizontal) and One row per object attribute (vertical).
cityobjectsJsonUrl The URL of the JSON file which can be generated automatically by using the KML/COLLADA/glTF Exporter (cf. Section 4.6.7.1). For more information please refer to the explanation below this table.
minLodPixels and maxLodPixels The minimum and maximum limit of the visibility range for each data layer to control the dynamic loading and unloading of the data tiles. For more information please refer to the explanation below this table.
maxCountOfVisibleTiles The maximum number of allowed visible data tiles. For more information please refer to the explanation below this table.
maxSizeOfCachedTiles The maximum allowable cache size expressed as a number of data tiles. For more information please refer to the explanation below this table.
  • More details on cityobjectsJsonUrl: This JSON file contains a list of the GMLIDs of all exported 3D objects, which might be distributed over different tiles. For every 3D object, the file also stores the tile in which it is contained together with its envelope represented as a bounding box in WGS84 lat/lon. This location information can be used to search for a certain 3D object with the help of the Geocoder widget (the loupe symbol in the top right corner of the screen), which has been extended to support a specific geocoding process performed in the following manner: In the input field, either a GMLID of a 3D object or an address can be entered. If an object with the given GMLID is found in the JSON file, the camera perspective will be adjusted to look at the center point of the 3D object with a proper oblique view. If not, the search engine Nominatim for OpenStreetMap will be used and the map view will be adjusted to the returned location and bounding box.

  • More details on minLodPixels and maxLodPixels: The maximum visibility range can start at 0 and end at an infinite value expressed as -1. Optionally, the user can directly specify the two parameter values within the 3D web client. Otherwise, the parameter values will be taken from the master JSON file, which also contains the parameters minLodPixels and maxLodPixels and their values as specified using the KML/COLLADA/glTF Exporter before performing the export process.

    With these two parameters, the 3D web client implements the so-called Level of Detail (LoD) concept, a common solution used in 3D computer graphics and GIS (e.g. KML NetworkLinks) for efficient streaming and rendering of tiled datasets. According to the LoD concept, data tiles with higher resolution should be loaded and visualized when the observer is viewing them from a short distance. When data tiles are far away from the observer, the data tiles with higher resolution should be substituted by data tiles with lower resolution. In order to realize this LoD concept in the 3D web client, each data tile that intersects the current view frustum will be projected onto the screen while navigating the Earth map. Subsequently, the diagonal length of the projected area on the screen will be calculated by the 3D web client to determine whether the respective data tile should be loaded or unloaded. If the diagonal length is greater than minLodPixels and less than maxLodPixels, the respective data tile will be loaded and displayed; otherwise it will be hidden from display and unloaded. Of course, all data tiles lying outside of the view frustum are unloaded and invisible anyway.

    _images/webmap_determination_tile_loading_fig.png

    Efficient determination of which data tiles should be loaded according to the user-defined visibility range in screen pixel

  • More details on maxCountOfVisibleTiles: Loading massive amounts of data tiles often results in poor performance of the 3D web client or even memory overload of the web browser. This could happen when, for example, the visibility range (determined by the parameters minLodPixels and maxLodPixels) starts at a very small value and ends at an infinite size. In this case, each data tile will always be visualized even though it only takes up a very small screen space. This issue can be avoided by a proper setting of the parameter maxCountOfVisibleTiles. When this limit is reached, additional data tiles that are farthest away from the camera will not be shown, regardless of the size of screen space they occupy. By default, this parameter has a value of 200, which is appropriate in most use cases. However, depending on the data volume of each tile and the hardware you use, this parameter value might need to be adjusted through practical tests.

  • More details on maxSizeOfCachedTiles: As mentioned before, the 3D web client implements a caching mechanism allowing for high-speed reloading of those data tiles that have been loaded before and are stored in the memory of the web browser. In order to prevent memory overload, the parameter maxSizeOfCachedTiles can be applied. With this parameter, the 3D web client implements the so-called Least Recently Used (LRU) algorithm, a caching strategy widely used in many computer systems. According to this caching algorithm, newly loaded data tiles will be successively put into the cache. When the cache size limit is reached, the 3D web client will remove the least recently visualized data tiles from the cache. By default, the value of this parameter is set to 200 and can of course be increased to achieve a better viewing experience, depending on the hardware you use.

Usage example

In this example, a tiled KML/glTF dataset of buildings in the Manhattan district of New York City (NYC) will be visualized on the 3D web client. This dataset is derived from the semantic 3D city model of New York City (NYC) which has been created by the Chair of Geoinformatics at Technical University of Munich on the basis of datasets provided by the NYC Open Data Portal.

The following parameter values should be entered into the corresponding input fields:

url http://www.3dcitydb.net/3dcitydb/fileadmin/public/3dwebclientprojects/NYC-Model-20170501/Building_gltf/Building_gltf_collada_MasterJSON.json
name Buildings
Layer data type COLLADA/KML/glTF
glTF version 1.0
thematicDataUrl https://docs.google.com/spreadsheets/d/1DbkMUSYW_YlE48MUxH5fak56uaCL8QXNrBgEr0gfuCY
> Thematic Data Source Google Sheets API
> Table Type All object attributes in one row
cityobjectsJsonUrl  
minLodPixels 125
maxLodPixels -1 or 1.7976931348623157e+308
maxCountOfVisibleTiles 200
maxSizeOfCachedTiles 200

After clicking on Add layer, a data layer will be loaded into the 3D web client and the corresponding layer name Buildings will be listed above the input panel. The Earth map can be zoomed to the extent of the loaded data layer by double-clicking on the layer name. The parameter values of the data layer (its radio button must be activated) can be changed and applied at any time by clicking on the Save layer settings button.

_images/3d_web_client_demo_nyc.png

Screenshot showing how to add a new KML/glTF data layer into the 3D web client

Users are also able to control the visibility of the selected data layers by deactivating the checkbox in front of the layer’s name or clicking on the Remove selected layer button to completely remove the layer from the 3D web client.

Handling Web Map Service data

Cesium supports adding additional imagery layers to the Earth map by using the OGC-compliant Web Map Service (WMS). The 3D web client provides a simple widget panel which allows the user to easily add and remove an arbitrary number of WMS layers. The widget panel [1] (marked in the following figure) can be expanded and collapsed by clicking on the Add WMS-Layer button on the widget panel.

_images/3d_web_client_wms.png

The input panel [1] for adding a new WMS layer and the BaseLayerPicker widget [2] where the added WMS layers will be listed together with the imagery layers available by default

A user-defined name for labelling the WMS layer must first be specified via the name(*) input field. In addition, the iconUrl parameter points to the URL address of an icon image, which will be listed together with the user-defined layer name in the BaseLayerPicker panel [2]. When the mouse pointer is over the icon image, a tooltip will appear, which can be specified in the tooltip(*) input field. The url parameter value corresponds to the URL address of the WMS server that provides the imagery contents of a WMS layer. According to the WMS specification, a WMS layer is allowed to contain one or more sublayers (listed in the WMS Capabilities file) whose names must be separated by commas and entered into the input field layers(*). Besides the standard WMS HTTP request parameters, additional parameters might be required by some WMS servers. In this case, such additional parameters must be formatted as key=value pairs separated by the “&” character and entered into the additionalParameters input field. The proxyUrl parameter helps the 3D web client to get around cross-domain issues when performing WMS requests. Since most WMS servers do not support CORS, a proxy running behind the 3D web client is required. If you use the JavaScript-based HTTP server shipped with the 3D web client, you do not need to change the default value, since there already exists a built-in proxy running with the relative path “/proxy/”. Otherwise, this parameter value must be adjusted according to the path of the proxy in use.
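
As an illustration, a hypothetical set of input values (all server addresses and layer names are made up) could look as follows:

name Orthophoto WMS
iconUrl https://example.com/icons/orthophoto.png
tooltip Aerial imagery provided via WMS
url https://example.com/geoserver/wms
layers orthophoto,roads
additionalParameters format=image/png&transparent=true
proxyUrl /proxy/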

Usage example:

In this example, a WMS imagery layer provided by the Vorarlberg State Government will be added to and displayed in the 3D web client. The following parameter values should be entered into the corresponding input fields:

_images/3d_web_client_wms_gui.png

Example showing how to add a new WMS layer to the 3D web client

As shown in the figure above, once the parameter settings have been completed, the WMS layer can be loaded by clicking on the Add WMS layer button [3], and its icon image together with its label name [4] will be listed on the BaseLayerPicker widget. You can use the Geocoder widget [5] to zoom the Earth map to the region of the Vorarlberg state and check the added WMS layer. By clicking on the Remove WMS layer button [6], the WMS layer will be removed and substituted with Bing Maps Aerial, which is the first item listed on the BaseLayerPicker widget.

Handling Digital Terrain Models

Cesium offers the possibility of high-performance streaming and rendering of Digital Terrain Models (DTM) for the realistic representation of the Earth’s surface. By default, Cesium provides two terrain layers, which can be selected in the BaseLayerPicker widget [2]. The first one is the so-called WGS84 Ellipsoid (the default terrain layer), which approximates the Earth’s surface using a smooth ellipsoid surface with a constant height value of 0. The other one is the so-called STK World Terrain (replaced by the Cesium World Terrain starting from September 1st, 2018) using worldwide 3D elevation data with an average grid resolution of 30 meters, which is sufficient in many use cases.

For specific application cases, high-resolution Digital Terrain Models might be required. For this case, the 3D web client provides a simple widget to facilitate handling terrain data, which must be created in a specific terrain format (heightmap or quantized-mesh) defined by Cesium. The open source software tool Cesium Terrain Builder can be used for creating terrain data in heightmap format. The created terrain data is generated in a hierarchical folder structure according to the TMS tiling schema and can be easily published on the web by uploading the terrain data files to a CORS-enabled web server.

The input panel [1] on the 3D web client for adding and removing terrain layers can be expanded and collapsed by clicking on the Add Terrain-Layer button.

_images/3d_web_client_dtm_gui.png

The input panel [1] for adding a new terrain layer and the BaseLayerPicker widget [2] where the added terrain layers will be listed together with the base layers available by default

For adding a new terrain layer, the input fields name(*), iconUrl(*), and tooltip(*) in the input panel [1] have to be filled with a proper label name, a URL of an icon image, and a short tooltip, respectively. When a terrain layer has been loaded, its icon image together with its label name will be listed in the BaseLayerPicker panel [2]. The tooltip will automatically appear when the mouse is moved over the respective icon image. The url parameter points to the URL of the web server folder where the terrain data is stored.
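
As an illustration, a hypothetical set of input values (all addresses are made up) could look as follows:

name High-resolution DTM
iconUrl https://example.com/icons/dtm.png
tooltip 0.5m Digital Terrain Model
url https://example.com/terrain/dtm/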

Usage example

In this example, a high-resolution (0.5m) Digital Terrain Model provided by the Vorarlberg State Government will be added to the 3D web client. This terrain data was created in heightmap format using the open source tool Cesium Terrain Builder. Here, the following parameter values should be entered into the corresponding input fields:

_images/3d_web_client_dtm_gui_numbers.png

Example showing how to add a new terrain layer to the 3D web client

As shown in the figure above, once the parameter settings have been completed, the terrain layer can be loaded by clicking on the Add Terrain layer button [3], and its icon image together with its label name [4] will be listed on the BaseLayerPicker widget. You can use the Geocoder widget [5] to zoom the Earth map to the region of the Vorarlberg state and check the loaded terrain data. By clicking on the Remove Terrain layer button [6], the terrain layer will be removed and substituted with the WGS84 Ellipsoid terrain layer.

Interaction with 3D objects

The 3D web client supports rich model interaction such as highlighting of 3D objects on mouse over and mouse click. More than one 3D object can be selected by Ctrl-clicking on them, and selected objects can also be hidden and redisplayed in the 3D web client interactively. In addition, the user can create a screenshot image of the current map view (including the highlighted and hidden 3D objects) or print it directly via the web browser. Moreover, when a 3D object is selected, it can be visually inspected in other third-party mapping applications (Bing Maps, Google Streetview, OpenStreetMap and DualMaps) from multiple view perspectives such as oblique view, street view, or a combined version.

For the sake of clarity, the above-mentioned functionalities will be illustrated with the help of a number of screenshots generated based on the online demo Semantic 3D City Model of Berlin, which shows all of Berlin’s buildings (> 550,000) with textured 3D geometries and many thematic attributes in the 3D web client. You can find the link to this demo on the following web page:

https://github.com/3dcitydb/3dcitydb-web-map

Once the demo is opened in your web browser, you may need to use the Geocoder widget to zoom the Earth map to the building object with the GMLID “BLDG_0003000b0009a940”.

_images/webmap_clicked_object_attribute_table_fig.PNG

By clicking on a building object it will automatically be highlighted and its attribute information will be queried from a Google Fusion Table and displayed in tabular form on the right side of the 3D web client

_images/3d_web_client_object_external_maps.png

By clicking on the dropdown list Show the selected object in External Maps, the user can select one of the given options to explore the selected building object in the chosen mapping application which will be opened in a new browser window or tab

_images/webmap_dualmaps_fig.png

If the option DualMaps has been chosen, the selected building will be shown in a so-called mash-up web application linking different view perspectives, e.g. Google 2D map view, Google Streetview, and Bing Maps oblique view

_images/webmap_multiple_object_highlighting_fig.PNG

A group of building objects can be interactively selected by Ctrl-clicking. Deactivating the selection of a certain building object can be done by Ctrl-clicking on it again

_images/3d_web_client_object_highlight_and_hide.png

The selected building objects can be hidden by clicking on the button Hide selected Objects. The GMLIDs of the selected (highlighted) and hidden building objects can be explored by clicking the drop-down buttons Choose highlighted Object and Choose hidden Object respectively

_images/3d_web_client_object_show_hidden.png

The hidden objects can be shown on the 3D web client again by clicking on the button Show Hidden Objects

_images/3d_web_client_object_clear_highlighting.png

The object selection along with the highlighting effect can be deactivated by clicking on the button Clear Highlighting

_images/3d_web_client_object_print_view.png

A screenshot of the current view can be created directly within the 3D web client by clicking on the button Create Screenshot or Print current view

_images/webmap_screenshot_print_fig.png

Once the button Print current view has been clicked, a printer settings dialog (which differs between web browsers) will appear, giving a preview of the screenshot file to be printed

_images/3d_web_client_object_shadow.png

Shadow visualization of the 3D city models can also be activated and deactivated by clicking the Toggle Shadows button

Mobile Support Extension

Starting from version 1.6.0, the 3DCityDB-Web-Map-Client is equipped with an extension that provides better support for mobile devices. The extension comes with a built-in mobile detector, which can automatically detect whether the 3D web client is operating on a mobile device and adjust the client’s behavior accordingly. The extension has been tested on several smartphones and tablets running Android and iOS.

Some of the most important mobile features enabled by this extension are listed as follows:

A more lightweight graphical user interface

In order to make the best use of the limited screen real estate available on mobile devices, some elements are removed or hidden from the web client, such as credit texts and logos, as well as some of Cesium’s built-in navigation controls that can easily be manipulated using touch gestures (see Fig. 6.28).

The main toolbox now scales to fit to the screen size. In case of excess lines/length, the toolbox becomes scrollable (see Fig. 6.29).

The infobox displayed when a city object (e.g. building) is clicked is now displayed in fullscreen with scrollable contents, as illustrated in Fig. 6.30 below.

_images/webmap_mobile_gui_fig.PNG

The 3DCityDB Web Map Client on mobile devices

_images/webmap_mobile_main_toolbox_fig.PNG

The main toolbox on mobile devices

_images/webmap_mobile_infobox_fig.PNG

The infobox on mobile devices

Geolocation-based features

The web client contains a new GPS button (located in the top right corner of the view toolbar) providing new functionalities involving the user’s current location and orientation (see Fig. 6.31 and Fig. 6.32). Namely:

  • Location “snapshot” (single-click): shows the user’s current position and orientation.
  • Real-time Orientation Tracking (double-click): periodically shows the user’s current orientation with fixed location.
  • Real-time Compass Tracking + Position (triple-click) or the “First-person View” mode: periodically shows the user’s current orientation and position.
_images/webmap_mobile_symbols_fig.png

From left to right, the 3 modes of geolocation-based features: Location snapshot, Real-time orientation tracking and First-person view

_images/webmap_mobile_first_person_view_fig.PNG

Real-time orientation tracking and First-person View on mobile devices

To disable real-time tracking, either click on the button again to return to “snapshot” mode or hold the button for one second; the camera will then ascend to a higher altitude above the current location.

Note

The mobile extension makes use of the Geolocation API and the DeviceOrientation API in HTML5. The Geolocation API only works via HTTPS since Google Chrome 50. Therefore, make sure the client is called from a secured page (via SSL/HTTPS). Additionally, permission to retrieve current orientation and location must be granted by the user.

Docker

Currently available here: TUM-GIS 3DCityDB Web-Map Docker

_images/webmap_exampe_displaying_citygml_features_fig.png

Screenshot showing the example of displaying different CityGML top-level features (building, bridge, tunnel, water, vegetation, transportation etc.) in glTF format in the 3D web client

Web Feature Service

Note

This is the documentation of the Web Feature Service version 5.1.

The OGC Web Feature Service Interface Standard (WFS) provides a standardized and open interface for requesting geographic features across the web using platform-independent calls. Rather than sharing geographic information at the file level, for example, the WFS offers direct fine-grained access to geographic information at the feature and feature property level. Web feature services allow clients to only retrieve or modify the data they are seeking, rather than retrieving a file that contains the data they are seeking and possibly much more.

The 3D City Database offers a Web Feature Service interface allowing web-based access to the 3D city objects stored in the database. WFS clients can directly connect to this interface and retrieve 3D content for a wide variety of purposes. Thus, users of the 3D City Database are no longer limited to using the Importer/Exporter tool for data retrieval. The WFS interface is platform-independent and database-independent, and therefore can be easily used to build CityGML-aware applications.

The 3D City Database WFS interface is implemented against version 2.0 of the OGC Web Feature Service standard (OGC Doc. No. 09-025r2) and hence is compliant with ISO 19142:2010. Previous versions of the WFS standard are not supported though. The development of the WFS is led by the company Virtual City Systems that offers an extended version of the WFS with additional capabilities such as, for instance, transaction support through insert, update, replace and delete operations. This additional functionality may be fed back to the open source project in future releases.

System requirements

The 3D City Database WFS is implemented as Java web application based on the Java Servlet technology. It therefore must be run in a Java servlet container on a web server. The following minimum software requirements have to be met:

  • Java servlet container supporting the Java Servlet 3.1 / 3.0 (or higher) specification
  • Java 8 Runtime Environment (Java 7 or earlier versions are not supported)
  • 3D City Database version 3.1 (or higher)

The WFS implementation has been successfully deployed and tested on Apache Tomcat 9 (http://tomcat.apache.org/). This is also the recommended servlet container. Apache Tomcat 8 is also supported. All previous versions of the Apache Tomcat server have reached end of life and are not supported anymore.

Tip

You cannot directly deploy the 3DCityDB WFS on an Apache Tomcat 10 server because this requires Jakarta EE 9 support. If you still want to use Tomcat 10, you can, however, automatically convert the WAR file of the WFS so that it runs on Tomcat 10. Simply use the open source Apache Tomcat migration tool for Jakarta EE for this purpose.

Note

Neither Java nor a servlet container are part of the WFS distribution package and therefore must be properly installed and configured before deploying the WFS. Please refer to the documentation of your favorite servlet container for more information.

Hardware requirements for the web server running the WFS depend on the intended use and the number of concurrent accesses. No general minimum requirements can be given, so make sure your system setup meets your needs.

Access to the individual operations of the WFS service can be secured using IP- and token-based access control rules. Further security mechanisms are not offered by the WFS. So, it is your responsibility as service provider to take any reasonable physical, technical and administrative measures to secure the WFS service and the access to the 3D City Database.

WFS clients connecting to the WFS interface of the 3D City Database must support the OGC WFS standard version 2.0. Moreover, they must be capable of consuming 3D data encoded in CityGML or CityJSON, which is delivered by the WFS server.

Installation

The 3D City Database WFS is shipped as a Java WAR (web archive) file. Please download the WFS distribution package from the GitHub release section or from the 3DCityDB website at https://www.3dcitydb.org.

Note

Alternatively, you may build your own WAR file from the source code provided on GitHub. This requires that you are experienced in building Java web applications from source using Gradle. No further documentation is provided here.

Please follow the installation steps below.

Step 1: Install and properly configure your Java servlet container
Please refer to the documentation of your servlet container for hints on installation and configuration. Make sure that the servlet container uses Java 8 (or higher) for running web applications.

Caution

In contrast to previous versions, it is no longer required to install specific Java libraries such as JDBC drivers as so-called shared or common libs in a global context of your servlet container. All libraries required by the WFS are rather automatically deployed locally in the WEB-INF/lib folder of the web application. No further manual steps are needed.

If you update a previous version of the WFS, you can choose to uninstall the global libraries of the previous version unless they are needed by another web application running on your servlet container. For Apache Tomcat 8 (or higher), you can simply delete the JAR files from the lib folder within the Tomcat installation directory after you have shut down the server. Please refer to the user manual of your previous WFS version to learn which JAR files need to be deleted. Uninstalling the global libraries is optional though and may be skipped. More details on how to manage global libraries can be found in the documentation of your servlet container.

Step 2: Configure your servlet container (optional)
Make sure that your servlet container has enough memory assigned (heap space ~ 1GB or more).

Note

You may, for instance, use the Java command-line options -Xmx (maximum heap size) and -Xms (initial heap size) for this purpose.
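
For Apache Tomcat, a common way to pass these options is a setenv script in the bin folder of the Tomcat installation. The following is only a sketch with example values; adjust them to your hardware and workload:

# bin/setenv.sh (Linux/macOS); on Windows, use setenv.bat with "set CATALINA_OPTS=..."
export CATALINA_OPTS="$CATALINA_OPTS -Xms1g -Xmx2g"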

Step 3: Deploy the WFS WAR file on your servlet container
If your servlet container is correctly set up and configured, simply deploy the WAR file to install the WFS web service. Again, the way to deploy a WAR file varies for different servlet containers. For Apache Tomcat servers, copy the WAR file into the webapps folder, which, by default, is in the installation directory of the Apache Tomcat server. This will automatically deploy the application. Alternatively, use the web-based Tomcat manager application to deploy WAR files on the server. The manager application is included in a default installation. For more information on deploying WAR files on Tomcat or different servlet containers, please refer to the corresponding documentation material.
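
For example, on a Linux machine where the environment variable CATALINA_HOME points to the Tomcat installation directory (the WAR file name is taken from the example below), the deployment can be as simple as:

cp citydb-wfs.war "$CATALINA_HOME/webapps/"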

Note

If you use the automatic deployment feature of Tomcat as described above, the name of the WAR file will be used as context path in the URL for accessing the application. For example, if the WFS WAR file is named citydb-wfs.war, then the context path of the WFS service will be http[s]://[host][:port]/citydb-wfs/. To pick a different context path, simply rename the WAR file or change Tomcat’s default behavior.

Step 4: Configure the WFS service
The WFS must be configured to meet your needs. For example, this includes providing connection details for the 3D City Database instance and the definition of the feature types that shall be served through the interface. These settings have to be manually edited in the configuration file config.xml of the service. Please check Section 7.3 for how to configure the WFS.

Note

Changes to the config.xml file typically require a reload or restart of the WFS web application (a restart of the servlet container itself is, of course, not required). Please check the documentation of your favorite servlet container for how to do so. In case of Apache Tomcat, you can simply use the manager application to reload web applications.

Step 5: Install ADE extensions (optional)
As a last step, you may install additional CityGML ADE extensions for the WFS. This step is optional and requires a compiled and ready-to-use ADE extension package. Simply copy the contents of the ADE extension package to the WEB-INF/ade-extensions directory of your deployed WFS application. The WEB-INF directory is typically located in the application folder, which is generally named after the WAR file and itself is a subfolder of the webapps folder in the Tomcat installation directory (see Fig. 7.1).

Note

The CityGML ADE must also be registered in the 3DCityDB instance to which your WFS service shall connect.

Configuring the WFS

Capabilities settings

The capabilities settings define the contents of the capabilities document that is returned by the WFS service upon a GetCapabilities request.

The capabilities document is generated dynamically from the contents of the config.xml file at request time. Only the service metadata (both its mandatory and optional parts) has to be explicitly specified by the user within the <capabilities> element. All other sections of the capabilities document are populated automatically from the config.xml file. For example, the set of feature types advertised in the <wfs:FeatureTypeList> section is derived from the content of the <featureTypes> element (cf. Section 7.3.2).

The service metadata is provided using the <owsMetadata> child element (see the example listing below). It is copied to the capabilities document “as is” and thus should be consistent and valid.

<capabilities>
  <owsMetadata>
    <ows:ServiceIdentification>
      <ows:Title>3D City Database Web Feature Service</ows:Title>
      <ows:ServiceType>WFS</ows:ServiceType>
      <ows:ServiceTypeVersion>2.0.0</ows:ServiceTypeVersion>
    </ows:ServiceIdentification>
    <ows:ServiceProvider>
      <ows:ProviderName/>
      <ows:ServiceContact/>
    </ows:ServiceProvider>
  </owsMetadata>
</capabilities>

Service metadata comprises information about the service itself that might be useful in machine-to-machine communication or for display to a human. This information is announced through the <ows:ServiceIdentification> child element. Mandatory components are the service title (<ows:Title>), the service type (<ows:ServiceType>, which may only take the fixed value WFS), and the supported WFS protocol versions (<ows:ServiceTypeVersion>). The 3DCityDB WFS currently supports the protocol versions 2.0.2 and 2.0.0.

Note

If, for example, the service should only offer the protocol version 2.0.0 to clients, then only provide one <ows:ServiceTypeVersion> element for this version. This is recommended if the software accessing the WFS only supports version 2.0.0 (e.g., FME 2018/2019). Invalid values of the <ows:ServiceIdentification> element will be overridden with reasonable default values at startup of the WFS service.

The child element <ows:ServiceProvider> contains information about the service provider such as contact information. Please refer to the OGC Web Services Common Specification (OGC 06-121r3:2009) to get an overview of the supported metadata fields that may be included in the capabilities document and therefore can be specified in <owsMetadata>.

Feature type settings

With the feature type settings, you can control which feature types can be queried from the 3D City Database and are served through the WFS interface. Every feature type that shall be advertised to a client must be explicitly listed in the config.xml file.

An example of the corresponding <featureTypes> XML element is shown below. In this example, CityGML Building and Road objects are available from the WFS service. In addition, a third feature type IndustrialBuilding coming from a CityGML ADE is advertised.

<featureTypes>
  <featureType>
    <name>Building</name>
    <ows:WGS84BoundingBox>
      <ows:LowerCorner>-180 -90</ows:LowerCorner>
      <ows:UpperCorner>180 90</ows:UpperCorner>
    </ows:WGS84BoundingBox>
  </featureType>
  <featureType>
    <name>Road</name>
    <ows:WGS84BoundingBox>
      <ows:LowerCorner>-180 -90</ows:LowerCorner>
      <ows:UpperCorner>180 90</ows:UpperCorner>
    </ows:WGS84BoundingBox>
  </featureType>
  <adeFeatureType>
    <name namespaceURI="http://www.citygml.org/ade/TestADE/1.0">IndustrialBuilding</name>
    <ows:WGS84BoundingBox>
      <ows:LowerCorner>-180 -90</ows:LowerCorner>
      <ows:UpperCorner>180 90</ows:UpperCorner>
    </ows:WGS84BoundingBox>
  </adeFeatureType>
  <version isDefault="true">2.0</version>
  <version>1.0</version>
</featureTypes>

The <featureTypes> element contains one <featureType> node per feature type to be advertised. The feature type is specified through the mandatory name property, which can only take values from a fixed list that enumerates the names of the CityGML top-level features (cf. config.xsd schema file). In addition, the geographic region covered by all instances of this feature type in the 3D City Database can optionally be announced as bounding box (lower left and upper right corner). The coordinate values must be given in WGS 84.

Note

The bounding box is neither automatically checked against nor computed from the database, but rather copied to the WFS capabilities document “as is”.

Feature types coming from a CityGML ADE are advertised using the <adeFeatureType> element. In contrast to CityGML feature types, the name property must additionally contain the globally unique XML namespace URI of the CityGML ADE, and the type name is not restricted to a fixed enumeration. Note that a corresponding ADE extension must be installed for the WFS service, and that the ADE extension must add support for the advertised ADE feature type. Otherwise, the ADE feature type is ignored. If you do not have ADE extensions, then simply skip the <adeFeatureType> element.

Besides the list of advertised feature types, also the CityGML version that shall be supported for encoding features in a response to a client’s request has to be specified. Use the <version> element for this purpose, which takes either 2.0 (for CityGML 2.0) or 1.0 (for CityGML 1.0) as value. If both versions shall be supported, simply use two <version> elements. However, in this case, you should define the default version to be used by the WFS by setting the isDefault attribute to true on one of the elements (otherwise, CityGML 2.0 will be the default).

Note

If your <featureTypes> element contains CityGML or ADE feature types that are not available for the chosen CityGML version, they are automatically removed from the list and are not advertised to clients by the WFS server.

Operations settings

The operations settings are used to define the WFS operations that shall be available to clients. The Simple WFS conformance class mandates that every WFS must at least support the operations GetCapabilities, DescribeFeatureType, ListStoredQueries, DescribeStoredQueries and the stored query GetFeatureById. These operations therefore do not have to be explicitly listed in the <operations> element in order to be offered by the WFS.

<operations>
  <requestEncoding>
    <method>KVP+XML</method>
    <useXMLValidation>true</useXMLValidation>
  </requestEncoding>
  <GetPropertyValue isEnabled="true"/>
  <GetFeature>
    <outputFormats>
      <outputFormat name="application/gml+xml; version=3.1"/>
      <outputFormat name="application/json"/>
    </outputFormats>
  </GetFeature>
  <ManageStoredQueries isEnabled="true"/>
</operations>

Request encoding

The <requestEncoding> element determines whether the WFS shall support XML-encoded and/or KVP-encoded requests. The desired method is chosen using the <method> child element that accepts the values KVP, XML and KVP+XML (default: KVP+XML). When setting the <useXMLValidation> child element to true, all XML encoded operation requests sent to the WFS are first validated against the WFS and CityGML XML schemas. Requests that violate the schemas are not processed but instead a corresponding error message is sent back to the client. Although XML validation might take some milliseconds, it is highly recommended to always set this option to true to avoid unexpected failures due to XML issues.

GetFeature operation

For the <GetFeature> operation, the <outputFormats> element lets you choose the available output formats that can be used in encoding the response to the client. The value “application/gml+xml; version=3.1” is the default and basically means that the response to a GetFeature operation will be purely XML-encoded (using CityGML as encoding format with the version specified in the feature type settings, cf. Section 7.3.2). In addition, the WFS can advertise the output format “application/json”. In this case, the response is delivered in CityJSON format. CityJSON is a JSON-based encoding of a subset of the CityGML data model.

Note

The WFS can only advertise the different output formats in the capabilities document. It is up to the client though to choose one of these output formats when requesting feature data from the WFS.
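
For illustration, a KVP-encoded GetFeature request asking for the CityJSON output format might look as follows. The host, port, context path and queried feature type are placeholders, and the exact type name and namespace handling may vary depending on your setup:

http[s]://[host][:port]/citydb-wfs/wfs?service=WFS&version=2.0.0&request=GetFeature&typeNames=bldg:Building&outputFormat=application/json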

For CityGML, the following additional options are available.

Output format options for CityGML
Option
Description
prettyPrint
Formats the XML response document using additional line breaks and indentations (boolean true / false, default: false).

The CityJSON output format options are presented below.

Output format options for CityJSON
Option
Description
prettyPrint
Formats the JSON response document using additional line breaks and indentations (boolean true / false, default: false).
significantDigits
Maximum number of digits for vertices (integer, default: 3). Identical vertices are snapped.
significantTextureDigits
Maximum number of digits for texture coordinates (integer, default: 7). Identical texture coordinates are snapped.
transformVertices
Apply the CityJSON-specific compression (boolean true / false, default: false).
addSequenceIdWhenSorting
If the response document shall be sorted (by using a fes:SortBy expression), then this option allows for adding a sequenceId attribute to each CityJSON object that reflects the sorting order. This is required because CityJSON itself does not support sorting (boolean true / false, default: false).
generateCityGMLMetadata
Adds an attribute called CityGMLMetadata that contains CityGML-specific metadata like the data types of generic attributes (boolean true / false, default: true).
removeDuplicateChildGeometries
CityJSON does not support reusing or referencing geometries. If geometries are reused within a city object in the database (e.g. between a Building and its BuildingInstallation child), they will be duplicated in the CityJSON output. Use this option to remove duplicate geometries from child objects. If a child object does not have any remaining geometry after removing the duplicates, it will be removed as well (boolean true / false, default: false).

The options are simply added beneath the corresponding <outputFormat> element and are applied to all response documents of the WFS in that format. The following snippet illustrates the use of the CityJSON format options.

<outputFormat name="application/json">
  <options>
    <option name="prettyPrint">true</option>
    <option name="significantDigits">5</option>
    <option name="significantTextureDigits">5</option>
    <option name="transformVertices">true</option>
    <option name="addSequenceIdWhenSorting">true</option>
    <option name="generateCityGMLMetadata">true</option>
    <option name="removeDuplicateChildGeometries">true</option>
  </options>
</outputFormat>

GetPropertyValue operation

Per default, the GetPropertyValue operation is not offered by the WFS service. In order to make this operation available to clients, the isEnabled attribute of the <GetPropertyValue> element has to be set to true (default: false).

DescribeFeatureType operation

The <DescribeFeatureType> operation lets you define the list of supported <outputFormats> similar to the GetFeature operation. This way you can enable clients to choose between the CityGML XML schemas or the CityJSON JSON schema for describing feature types.
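
A corresponding configuration sketch, assuming the <DescribeFeatureType> element mirrors the <outputFormats> structure shown above for <GetFeature>, could look like this:

<DescribeFeatureType>
  <outputFormats>
    <outputFormat name="application/gml+xml; version=3.1"/>
    <outputFormat name="application/json"/>
  </outputFormats>
</DescribeFeatureType>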

Manage Stored Queries

To advertise the operations CreateStoredQuery and DropStoredQuery for the server-side management of stored queries, the element <ManageStoredQueries> has to be included and its attribute isEnabled has to be set to true (default: false).

Filter capabilities settings

The WFS service supports logical, comparison and spatial filter predicates in query operations to specify how city objects in the 3D City Database should be filtered to produce a result set. The filter capabilities settings specify which filter operations shall be available for clients and thus can be used in requests to the server.

The <filterCapabilities> element is used to enumerate the filter operators that are advertised to clients. The following listing shows an example <filterCapabilities> element that contains all filter operations that are supported by the WFS implementation.

<filterCapabilities>
  <scalarCapabilities>
    <logicalOperators>true</logicalOperators>
    <comparisonOperators>
      <operator>PropertyIsEqualTo</operator>
      <operator>PropertyIsNotEqualTo</operator>
      <operator>PropertyIsLessThan</operator>
      <operator>PropertyIsGreaterThan</operator>
      <operator>PropertyIsLessThanOrEqualTo</operator>
      <operator>PropertyIsGreaterThanOrEqualTo</operator>
      <operator>PropertyIsLike</operator>
      <operator>PropertyIsNull</operator>
      <operator>PropertyIsNil</operator>
      <operator>PropertyIsBetween</operator>
    </comparisonOperators>
  </scalarCapabilities>
  <spatialCapabilities>
    <operator>BBOX</operator>
    <operator>Equals</operator>
    <operator>Disjoint</operator>
    <operator>Touches</operator>
    <operator>Within</operator>
    <operator>Overlaps</operator>
    <operator>Intersects</operator>
    <operator>Contains</operator>
    <operator>DWithin</operator>
    <operator>Beyond</operator>
  </spatialCapabilities>
</filterCapabilities>

Simply remove single or multiple items from this list in order to reduce the filter capabilities of your WFS service. For instance, if you do not want clients to issue spatial queries against your WFS service, simply remove the <spatialCapabilities> element. At minimum, every WFS supports querying objects by their gml:id identifier through the ResourceId operation to satisfy the WFS Simple conformance class. For this reason, the ResourceId operation is mandatory and thus not part of the <filterCapabilities> enumeration.
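
For example, a reduced <filterCapabilities> element that only advertises logical operators and two comparison operators, and disables spatial queries altogether, might look like this:

<filterCapabilities>
  <scalarCapabilities>
    <logicalOperators>true</logicalOperators>
    <comparisonOperators>
      <operator>PropertyIsEqualTo</operator>
      <operator>PropertyIsLike</operator>
    </comparisonOperators>
  </scalarCapabilities>
</filterCapabilities>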

Please refer to the OGC Filter Encoding 2.0 Encoding Standard version 2.0 (OGC 09-026r2) for a comprehensive documentation of the filter semantics.

Constraints settings

The <constraints> element of the config.xml allows for defining constraints on dedicated WFS operations.

<constraints>
  <supportAdHocQueries>true</supportAdHocQueries>
  <countDefault>500</countDefault>
  <computeNumberMatched>true</computeNumberMatched>
  <useDefaultSorting>false</useDefaultSorting>
  <currentVersionOnly>true</currentVersionOnly>
  <exportCityDBMetadata>false</exportCityDBMetadata>
  <exportAppearance>false</exportAppearance>
  <useResultPaging>true</useResultPaging>
  <stripGeometry>false</stripGeometry>
  <lodFilter isEnabled="true" mode="or" searchMode="depth" searchDepth="1">
    <lod>1</lod>
    <lod>2</lod>
  </lodFilter>
</constraints>

The following table shows which WFS operation is affected by which constraint.

Overview of WFS operations affected by constraints
Constraint
WFS operations
supportAdHocQueries
GetPropertyValue, GetFeature
countDefault
GetPropertyValue, GetFeature
computeNumberMatched
GetPropertyValue, GetFeature
useDefaultSorting
GetPropertyValue, GetFeature
currentVersionOnly
GetPropertyValue, GetFeature
exportCityDBMetadata
GetPropertyValue, GetFeature
exportAppearance
GetPropertyValue, GetFeature
useResultPaging
GetPropertyValue, GetFeature
stripGeometry
GetPropertyValue, GetFeature
lodFilter
GetPropertyValue, GetFeature

supportAdHocQueries

If the WFS shall support “ad hoc” queries besides stored queries, then the <supportAdHocQueries> constraint has to be set to true (default: true). Stored queries are predefined queries that are stored by the server and at most allow setting predefined parameters. In contrast, ad hoc queries enable clients to send their own queries with arbitrary filter expressions to the WFS service.

countDefault

The <countDefault> constraint restricts the number of city objects to be returned by the WFS to the user-defined value, even if the request is satisfied by more city objects in the 3D City Database. The default behavior is to return all city objects matching a request. If a maximum count limit is defined, then this limit is automatically advertised in the server’s capabilities document using the CountDefault constraint.

computeNumberMatched

By default, the WFS server determines the total number of city objects or values that match the request and returns this number as numberMatched attribute in the response document. Computing that number might take a long time on large databases. By setting the constraint <computeNumberMatched> to false (default: true), the number of matches is not computed and “unknown” is returned as value for the numberMatched attribute.

In addition to numberMatched, the numberReturned attribute denotes the number of city objects or values that are actually contained in the response document. This value must always be determined for every response document. To keep response times short for large databases, it is therefore not only recommended to set <computeNumberMatched> to false, but also to restrict the number of returned objects or values already in the query by using, for instance, reasonable filter expressions or the count and startIndex parameters.

useDefaultSorting

Using the fes:SortBy clause of a query, a client can request a specific sorting of the objects in the response document. If no fes:SortBy clause is present, then the WFS will not automatically apply a default sorting. Although strictly speaking this behavior violates a conformance requirement of the WFS 2.0 specification, requests without sorting can be processed and answered faster.

To change the default behaviour, the <useDefaultSorting> constraint has to be set to true (default: false). The result set is then automatically sorted by the database ID of the city objects in case the client does not specify a fes:SortBy clause. On the one hand, this ensures that subsequent invocations of the same query on the same set of data result in a response document that presents the objects in the same order. On the other hand, the WFS then complies with the corresponding conformance requirements.

currentVersionOnly

City objects can be marked as terminated in the 3DCityDB using the attribute terminationDate without having to physically delete them from the database. This allows keeping the object history in the database and querying previous object versions based on timestamps. The current version of a city object can simply be identified by the fact that the column TERMINATION_DATE of the CITYOBJECT table is not set. If a client wants to explicitly request the current or a specific previous version of the city objects, a corresponding filter expression on core:terminationDate must be defined for the query. Otherwise, the response may contain duplicates of the same city object with different core:terminationDate values.

Setting an appropriate filter can easily be forgotten or might not be supported by a given WFS client software. By default, the WFS therefore returns only the current version of a city object in responses to GetPropertyValue and GetFeature requests by automatically adding a filter expression on core:terminationDate. If you want to suppress this default behavior, you can set the <currentVersionOnly> constraint to false (default: true). In this case, a user of the WFS must make sure to define a proper filter expression on each request. If a 3DCityDB is used without object history, the default behavior can be safely deactivated. Clearly, without this additional filter, requests can be answered faster by the WFS server.
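
For clients that have to define such a filter themselves (i.e., when <currentVersionOnly> is set to false), a query filter selecting only non-terminated city objects could, for example, be expressed with the Filter Encoding operator PropertyIsNull. The fes and core namespace prefixes are assumed to be declared in the request:

<fes:Filter>
  <fes:PropertyIsNull>
    <fes:ValueReference>core:terminationDate</fes:ValueReference>
  </fes:PropertyIsNull>
</fes:Filter>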

exportCityDBMetadata

The <exportCityDBMetadata> (default: false) flag allows exporting the metadata properties LINEAGE, UPDATING_PERSON, LAST_MODIFICATION_DATE and REASON_FOR_UPDATE of city objects stored in the table CITYOBJECT. Since these properties are not defined by CityGML, the WFS uses a CityGML Application Domain Extension (ADE) to include the properties in the response document. More information about this 3DCityDB ADE is available in Section 4.5.8.10.

exportAppearance

The WFS supports the export of appearance properties (i.e., material and texture information) of city objects. However, this requires setting <exportAppearance> to true. The default value for this constraint is false, so that no appearance properties are returned by the WFS by default. The export includes both local and global appearances. Since global appearances are not stored as inline attributes of the city objects but rather as individual top-level features, they are returned within the <wfs:additionalObjects> element of the response document in accordance with the WFS specification.

Note

Further settings for exporting appearances can be found in Section 7.3.8.

useResultPaging

Result paging is the ability of a client to scroll through a set of response features or values, n features or values at a time, much like one scrolls through the response from a search engine one page at a time. In order for paging to be triggered, either the count parameter shall be set on the request or the WFS server shall implement a default count value (see countDefault constraint). Result paging is accomplished by following the previous and next URLs provided in the response document.

Result paging is enabled by default for the WFS. To disable it, simply set the <useResultPaging> constraint to false (default: true). Whether result paging is available is also advertised in the server’s capabilities document using the ImplementsResultPaging constraint.

Note

Further settings in the context of result paging can be found in Section 7.3.8.

stripGeometry

When setting <stripGeometry> to true (default: false), the WFS will remove all spatial properties from a city object before returning the city object to the client. Thus, the client will not receive any geometry values.

lodFilter

The <lodFilter> constraint defines a server-side filter on the LoD representations of the city objects. When using this constraint, city objects in a response document will only contain those LoD levels that are enumerated using one or more <lod> child elements of <lodFilter>. Further LoD representations of a city object, if any, are automatically removed. If a city object satisfies a query but does not have a geometry representation in at least one of the specified LoD levels, it will be skipped from the response document and thus not returned to the client.

The default behavior of the LoD filter can be adapted using attributes on the <lodFilter> element. The mode attribute defines how the selected LoDs should be evaluated and can take one of the values described below.

Available filter modes
Filter mode
Description
or
City objects having a spatial representation in at least one of the selected LoDs will be exported. Additional LoD representations of the city object that do not match the user selection are not exported.
and
Only city objects having a spatial representation in all of the selected LoDs will be exported. Additional LoD representations of the city object that do not match the user selection are not exported.
minimum
This is a special version of the Or mode that only exports the lowest LoD representation from the matching ones. The exported LoD may therefore differ for each city object.
maximum
This is a special version of the Or mode that only exports the highest LoD representation from the matching ones. The exported LoD may therefore differ for each city object.

The default mode value is or. When setting the searchMode attribute to depth, then you can use the additional searchDepth attribute to specify how many levels of nested city objects shall be considered when searching for matching LoD representations. If searchMode is set to all, then all nested city objects will be considered (default: searchMode = depth, searchDepth = 1).

The following example illustrates the use of the searchDepth attribute. Assume a Building feature having a nested BuildingInstallation sub-feature and a nested WallSurface sub-feature as direct children. Moreover, the BuildingInstallation itself has a nested RoofSurface sub-feature.

<bldg:Building>
  <bldg:outerBuildingInstallation>
    <bldg:BuildingInstallation>
      <bldg:boundedBy>
        <bldg:RoofSurface></bldg:RoofSurface>
      </bldg:boundedBy>
    </bldg:BuildingInstallation>
  </bldg:outerBuildingInstallation>
  <bldg:boundedBy>
    <bldg:WallSurface></bldg:WallSurface>
  </bldg:boundedBy>
</bldg:Building>

When setting the search depth to “1” in this example, not only the bldg:Building but also its nested bldg:BuildingInstallation and bldg:WallSurface are searched for a matching LoD representation, but not the bldg:RoofSurface of the bldg:BuildingInstallation. This roof surface is at nesting depth 2 when counted from the bldg:Building. Thus, the search depth would have to be set to “2” to also consider this bldg:RoofSurface feature.

Note

The more levels you enter for the searchDepth attribute, the more complex the resulting SQL queries for the 3DCityDB will get.
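
Putting these attributes together, an <lodFilter> constraint that exports only the lowest available LoD among LoD 2 and LoD 3 and searches all nested city objects might be configured as follows (the chosen values are purely illustrative):

<lodFilter isEnabled="true" mode="minimum" searchMode="all">
  <lod>2</lod>
  <lod>3</lod>
</lodFilter>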

Postprocessing settings

Note

These settings are only applied when a client chooses CityGML as output format.

The postprocessing settings allow for specifying XSLT transformations that are applied on the CityGML data of a WFS response before sending the response to the client.

<postProcessing>
  <xslTransformation isEnabled="true">
    <stylesheet>AdV-coordinates-formatter.xsl</stylesheet>
  </xslTransformation>
</postProcessing>

To enable transformations, set the isEnabled attribute on the <xslTransformation> child element to true. In addition, provide one or more <stylesheet> elements enumerating the XSLT stylesheets that shall be applied in the transformation. The stylesheets are supposed to be stored in the xslt-stylesheets subfolder of the WEB-INF folder of your WFS application. Thus, any relative path provided as <stylesheet> will be resolved against WEB-INF/xslt-stylesheets/. You may alternatively provide an absolute path pointing to another location in your local file system. However, note that the WFS web application must have appropriate access rights to this location.

If you provide more than one XSLT stylesheet, then the stylesheets are executed in the given sequence of the <stylesheet> elements, with the output of a stylesheet being the input for its direct successor.
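
As a minimal sketch, the following XSLT 1.0 identity stylesheet (stored, for example, as WEB-INF/xslt-stylesheets/identity.xsl, a hypothetical file name) simply copies every top-level feature unchanged and can serve as a starting point for your own transformations:

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Identity template: copies all nodes and attributes unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>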

Note

  • To be able to handle arbitrarily large exports, the export process reads single top-level features from the database, which are then written to the target file. Thus, each XSLT stylesheet will just work on individual top-level features but not on the entire file.
  • The output of each XSLT stylesheet must again be a valid CityGML structure.
  • Only stylesheets written in the XSLT language version 1.0 are supported.

Database settings

The database settings define the connection parameters for connecting to the 3D City Database instance the WFS service should give access to. Moreover, spatial reference systems (SRS) that shall be supported by the WFS server in addition to the reference system of the 3DCityDB have to be defined here. The contents of the <database> element are shown below.

<database>
  <referenceSystems>
    <referenceSystem id="WGS84">
      <srid>4326</srid>
      <gmlSrsName>http://www.opengis.net/def/crs/epsg/0/4326</gmlSrsName>
      <description>WGS 84</description>
    </referenceSystem>
  </referenceSystems>
   <connection
   initialSize="10"
   maxActive="100"
   maxIdle="50"
   minIdle="0"
   suspectTimeout="60"
   timeBetweenEvictionRunsMillis="30000"
   minEvictableIdleTimeMillis="60000">
    <description/>
    <type>PostGIS</type>
    <server/>
    <port>5432</port>
    <sid/>
    <schema/>
    <workspace/>
    <user/>
    <password/>
  </connection>
</database>

Connection parameters

For the <connection> element, provide the type of the database (PostGIS or Oracle), the server name (network name or IP address) and port number of the database server (default: 5432 for PostgreSQL; 1521 for Oracle), the sid (for PostgreSQL, enter the database name; for Oracle, enter the database SID or service name), and the user and password of the database user. You can copy and paste these settings from the config file of the Importer/Exporter. Use the optional schema element if you want to connect to a schema other than the default schema. The description is optional and can be left empty.

For Oracle databases, you can additionally specify the workspace to connect to in case your database is version-enabled. All operations will then be executed against this workspace. You must provide the <name> of the workspace and an optional <timestamp> as child elements. If no workspace is specified, the LIVE workspace is used by default.
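
A sketch of such a workspace definition, assuming a version-enabled Oracle database with a workspace named MY_WORKSPACE (the name and the timestamp value are illustrative), might look as follows:

<workspace>
  <name>MY_WORKSPACE</name>
  <timestamp>2022-01-01</timestamp>
</workspace>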

Hint

The WFS service can only give access to one database instance. If you want to change this database, you need to adapt the config.xml file and restart the service afterwards. If you want to realize WFS interfaces for several database instances, you need to deploy the WFS service multiple times.

In addition to these minimum settings, the <connection> element takes optional attributes that let you configure the use of physical connections to the database server. This is especially important for production servers and if more than one WFS service connects to the same database server (in this case, you should also carefully configure the database itself). The attributes together with their meaning are described in the following table.

Optional database connection settings.
Attribute
Description
initialSize
(int) The initial number of physical connections that are created when the database connection is established (default: 10).
maxActive
(int) The maximum number of active connections to the database that can be allocated at the same time (default: 100). NOTE – make sure your database is configured to handle this number of parallel active connections.
maxIdle
(int) The maximum number of connections that should be kept active at all times (default: 50). Idle connections are checked periodically (if enabled) and connections that have been idle for longer than minEvictableIdleTimeMillis will be released (also see testWhileIdle).
minIdle
(int) The minimum number of established connections that should be kept active at all times (default: 0). The connection pool can shrink below this number if validation queries fail.
maxWait
(int) The maximum number of milliseconds that the service will wait (when there are no available connections) for a connection before throwing an exception (default: 30000, i.e. 30 seconds).
testOnBorrow
(boolean) The indication of whether connections will be validated before being used by the service. If a connection fails to validate, it will be dropped, and the service will attempt to borrow another (default: false). NOTE - for a true value to have any effect, the validationQuery parameter must be set to a non-null string. In order to have a more efficient validation, see validationInterval.
testOnReturn
(boolean) The indication of whether connections will be validated before being returned to the internal connection pool (default: false). NOTE - for a true value to have any effect, the validationQuery parameter must be set to a non-null string.
testWhileIdle
(boolean) The indication of whether connections will be validated by the idle connections evictor (if any). If a connection fails to validate, it will be dropped (default: false). NOTE - for a true value to have any effect, the validationQuery parameter must be set to a non-null string.
validationQuery
(String) The SQL query that will be used to validate connections. If specified, this query does not have to return any data (default: null). Example values are “select 1 from dual” (Oracle) or “select 1” (PostgreSQL).
validationClassName
(String) The name of a class which implements the org.apache.tomcat.jdbc.pool.Validator interface and provides a no-arg constructor (may be implicit). If specified, the class will be used instead of any validation query to validate connections (default: null). NOTE – for a non-null value to have any effect, the class has to be implemented by you as part of the source code of the WFS service. Use with care.
timeBetweenEvictionRunsMillis
(int) The number of milliseconds to sleep between runs of the idle connection validation/cleaner. This value should not be set under 1 second. It dictates how often we check for idle, abandoned connections, and how often we validate idle connections (default: 30000, i.e. 30 seconds).
minEvictableIdleTimeMillis
(int) The minimum amount of time a connection may be idle before it is eligible for eviction (default: 60000, i.e. 60 seconds).
removeAbandoned
(boolean) Flag to remove abandoned connections if they exceed the removeAbandonedTimeout. If set to true, a connection is considered abandoned and eligible for removal if it has been in use longer than the removeAbandonedTimeout. See also logAbandoned (default: false).
removeAbandonedTimeout
(int) Timeout in seconds before an abandoned (in use) connection can be removed (default: 60, i.e. 60 seconds). The value should be set to the longest running query.
logAbandoned
(boolean) Flag to log stack traces for application code which abandoned a connection. NOTE - this adds overhead for every connection borrow (default: false).
connectionProperties
(String) The connection properties that will be sent to the JDBC driver when establishing new connections. Format of the string must be [propertyName=property;]*. NOTE - the “user” and “password” properties will be passed explicitly, so they do not need to be included here (default: null).
initSQL
(String) A custom query to be run when a connection is first created (default: null).
validationInterval
(long) To avoid excess validation, only run validation at most at this frequency - time in milliseconds. If a connection is due for validation, but has been validated previously within this interval, it will not be validated again (default: 30000, i.e. 30 seconds).
jmxEnabled
(boolean) Register the internal connection pool with JMX or not (default: true).
fairQueue
(boolean) Set to true if connection requests should be treated fairly in a true FIFO fashion (default: true).
abandonWhenPercentageFull
(int) Connections that have been abandoned (timed out) will not get closed and reported up unless the number of connections in use is above the percentage defined by abandonWhenPercentageFull. The value should be between 0-100 (default: 0, which implies that connections are eligible for closure as soon as removeAbandonedTimeout has been reached).
maxAge
(long) Time in milliseconds to keep connections alive. When a connection is returned to the internal pool, it will be checked whether now - time-when-connected > maxAge has been reached, and if so, the connection is closed (default: 0, which implies that connections will be left open and no age check will be done).
suspectTimeout
(int) Timeout value in seconds (default: 0).

Schema name

The optional <schema> element can be used to define the database schema the WFS service shall access for answering requests and executing transactions. For PostgreSQL, <schema> has to reference a schema (default: citydb) within the database given by the <sid> parameter of the connection details. Under Oracle, <schema> refers to another user account. The <user> provided in the connection details therefore requires sufficient privileges to access the database tables and objects of the user <schema>. The 3DCityDB is shipped with database scripts to create new schemas under PostgreSQL, or to grant read-only access to a different user account under Oracle.

Spatial reference systems

Additional spatial reference systems the WFS service should support have to be listed within the <referenceSystems> element. Provide the srid (spatial reference ID) of the SRS. This value depends on the database system (PostgreSQL/PostGIS or Oracle) running the 3DCityDB. Be careful to pick the correct value. In most cases it will match the EPSG code of the SRS. The gmlSrsName element defines the string identifier of the SRS that has to be used by clients in requests. You are not free to pick an arbitrary identifier but should follow the OGC best practice for encoding SRS names (see WFS 2.0 specification document for details). The description is optional and can be left empty.
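
For example, to additionally offer ETRS89 / UTM zone 32N (EPSG:25832), a corresponding <referenceSystem> entry could be added as follows (assuming the srid 25832 is available in your database; the id attribute value is illustrative):

<referenceSystem id="ETRS89_UTM32N">
  <srid>25832</srid>
  <gmlSrsName>http://www.opengis.net/def/crs/epsg/0/25832</gmlSrsName>
  <description>ETRS89 / UTM zone 32N</description>
</referenceSystem>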

Note

The WFS always supports the SRS of the 3DCityDB by default. Thus, this SRS does not have to be explicitly defined in the config.xml.

Server settings

Server-specific settings are available through the <server> element in the config.xml file.

<server>
  <externalServiceURL>http://server:port/context-path</externalServiceURL>
  <maxParallelRequests>30</maxParallelRequests>
  <waitTimeout>60</waitTimeout>
  <responseCacheTimeout>300</responseCacheTimeout>
  <enableCORS>true</enableCORS>
  <timeZone>Europe/Berlin</timeZone>
  <textureServiceURL>http://server:port/context-path</textureServiceURL>
  <textureCache isEnabled="true">
    <localCachePath>/some/local/path</localCachePath>
  </textureCache>
  <security isEnabled="true">
    <accessControl>
      <scope operation="GetFeature"/>
      <allow ip="192.168.10.0/24 172.16.0.0/16"/>
      <allow token="25f8feac-2b25-4e03-b7fc-7634bc4be863"/>
      <deny ip="::1"/>
    </accessControl>
  </security>
  <tempCache>
    <mode>database</mode>
  </tempCache>
</server>

externalServiceURL

The external service URL of the WFS can be denoted using the <externalServiceURL> element. The URL should include the protocol (typically http or https), the server name and the full context path where the service is available for clients. Also announce the port on which the service listens if it is not equal to the default port associated with the given protocol.

Note

The service URL is not configured through <externalServiceURL>. It rather follows from your servlet container settings and network access settings (e.g., if your servlet container is behind a reverse proxy). The <externalServiceURL> value is only used in the capabilities document and thus announced to a client. Most clients rely on the service URL in the capabilities document and will send requests to this URL. So, make sure that the WFS is available at the <externalServiceURL> provided in the config.xml.

maxParallelRequests

The <maxParallelRequests> value defines how many requests will be handled by the WFS service at the same time (default: 30). If the number of parallel requests exceeds the given limit, then new requests are blocked until active requests have been fully processed and the total number of active requests has fallen below the limit. Note that this parameter mainly affects requests that require a database connection. To disable the request limit, simply set <maxParallelRequests> to 0.

Note

Every WFS can only open a maximum number of physical connections to the database system running the 3D City Database instance. This upper limit is set through the maxActive attribute on the <connection> element (cf. Section 7.3.7). Since every request may use more than one connection, make sure that the number of <maxParallelRequests> is below the maximum number of physical connections.

waitTimeout

In case an incoming request is blocked because the maximum number of parallel requests has been reached, the <waitTimeout> option lets you specify the maximum time in seconds the WFS service waits for a free request slot before sending an error message to the client (default: 60 seconds).

responseCacheTimeout

If result paging is enabled for the 3DCityDB WFS (see Section 7.3.5), the <responseCacheTimeout> parameter can be used to define the length of time (in seconds) that responses shall be cached for the purpose of paging using the next and previous URLs in the response document (default: 300 seconds).

enableCORS

The flag <enableCORS> (default: true) allows for enabling Cross-Origin Resource Sharing (CORS). Usually, the same-origin policy (SOP) prevents a client from sending cross-origin requests. If CORS is enabled, the WFS server sends the HTTP header Access-Control-Allow-Origin with the value * in the response.

Note

When enabling CORS support through the WFS service, global settings for the HTTP header Access-Control-Allow-Origin on the level of the servlet container are overridden. If such global CORS settings are configured for your servlet container, it therefore might be better to deactivate the WFS-based CORS support (set <enableCORS> to false).

Please refer to the documentation of your servlet container for information about how to enable CORS support on the level of the servlet container. For instance, the Apache Tomcat 9.0 documentation describes a CORS filter that can be configured at container level.

timeZone

The optional <timeZone> parameter is used to set the time zone of the WFS service to a specific value. The time zone is relevant, for instance, to process Date and TimeStamp data from the database. The parameter expects an official time zone ID, either an abbreviation such as “PST”, a full name such as “Europe/Berlin”, or a custom ID such as “GMT-08:00”. If no <timeZone> is provided, the time zone of the servlet container running the WFS is used as default. In most scenarios, this default setting should be fine.

Note

Note that if a time zone is provided but cannot be set (e.g. due to an invalid or unsupported ID), the start of the WFS service is aborted with an error message. Subsequent requests to the service also result in an error message.

textureServiceURL

In case the WFS has been configured to export appearances of city objects (see Section 7.3.5), the appearance information itself is encoded as CityGML <Appearance> element in a response document to a GetFeature request (or using similar structures in alternative output formats such as CityJSON). Texture images, however, are not delivered by the WFS service itself but through a separate REST interface.

This RESTful texture image service is part of the WFS web application and, thus, is automatically started with the WFS service. Assume that http://[host][:port]/citydb-wfs/ is the context path of your WFS service (see Section 7.2 for more details). Then the URL of the REST service will be http://[host][:port]/citydb-wfs/texture/. This URL is used in the response document to reference texture images in the following way:

http[s]://[host][:port]/citydb-wfs/texture/[bucket]/[filename]

The [bucket] path element is an integer value under control of the REST service and is used to organize the texture images into separate subfolders. The [filename] of the texture image is also managed by the REST service and may differ from the filename stored in the 3DCityDB to ensure unique names. The following CityGML snippet illustrates how texture images are referenced based on this scheme in a WFS response document. A client consuming this document can easily follow the URL to download the texture image.

<bldg:Building gml:id="BLDG_0815">
  <app:appearance>
    <app:Appearance>
      <app:surfaceDataMember>
        <app:ParameterizedTexture>
          <app:imageURI>http://some.host.com/citydb-wfs/texture/3/tex_2.jpg</app:imageURI>
        </app:ParameterizedTexture>
      </app:surfaceDataMember>
    </app:Appearance>
  </app:appearance>
</bldg:Building>

The optional <textureServiceURL> element lets you change the external URL of the REST service that is used in the response document. By default, the URL is composed from the request of the client, and this will already be appropriate in most cases. If an <externalServiceURL> is specified (see above), then this value will be used for creating the URL to the texture image. The <textureServiceURL> element allows you to override the default behavior and to use a dedicated value for the REST service.

textureCache

By default, every time a client requests a texture image through the REST service, the image is queried anew from the 3DCityDB. In order to reduce database traffic, the REST service can use a local texture cache instead. Simply set the isEnabled attribute on the <textureCache> element to true to make use of this feature. You can provide a <localCachePath> pointing to your local file system where the texture cache should be stored. Make sure that this path is both read and write accessible to the WFS service. If you omit the <localCachePath> element, the cache will be created in the WEB-INF/texture_cache folder within your web application.

Note

Texture images can be served faster to the client when using a texture cache. Enabling the texture cache is therefore the recommended setting. Note that depending on the number and size of texture images stored in your 3DCityDB instance, the texture cache might require substantial space on your hard disk.

security

Individual WFS operations can be secured using IP- and token-based access control rules. If an access rule has been defined for an operation, then this operation may only be invoked by clients having explicit access permission. Otherwise, the execution of the operation is denied and a corresponding error message is sent back to the client. The <security> element can therefore be used to control, for example, that only specific clients are allowed to request city objects from the database.

To use access rules, the isEnabled attribute of the <security> element must first be set to true. The rules are then given by one or more <accessControl> child elements. Each <accessControl> element can define its scope by enumerating the WFS operations to which it shall be applied. The WFS operations must simply be listed using the operation attribute of the <scope> element. The allowed values are defined as a fixed enumeration in the config.xsd schema file. If more than one operation shall be listed, then a white space must be used as delimiter. If the <scope> is omitted, then the <accessControl> element applies to all WFS operations.

Access to the operations of an <accessControl> element is either granted or restricted through <allow> and <deny> elements. An <accessControl> element may have multiple <allow> and <deny> child elements in an arbitrary order. The ip attribute of both elements is then used to define the IP addresses of the clients that shall be affected by the rule. The value of the ip attribute can be a simple IP address, but notations based on subnet masks and IP ranges are also supported. Moreover, both IPv4 and IPv6 addresses can be used. More than one IP address target can be listed on the ip attribute using a single white space as delimiter.

In addition to IP addresses, one or more access tokens can be defined for <allow> elements using the token attribute. A token is an arbitrary character string that must be sent by a client on each request in order to get access. Independent of whether the request is sent using HTTP GET or HTTP POST, the token must be provided as a separate parameter of the form token=<string>. Tokens can be useful, for example, if requests are forwarded using internal proxy servers.
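
For example, the following <accessControl> rule restricts both the GetFeature and GetPropertyValue operations so that they can only be invoked by clients from a given subnet or by clients presenting a token (the IP range and the token value are, of course, illustrative):

<accessControl>
  <scope operation="GetFeature GetPropertyValue"/>
  <allow ip="10.0.0.0/8"/>
  <allow token="my-secret-token"/>
</accessControl>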

The following simple scheme is used to decide whether the request of a client will be processed or rejected:

  • If the <security> settings are inactive because isEnabled is set to false, then all requests of all clients will be processed (default behavior).
  • If the WFS operation invoked by the client is not covered by any <accessControl> element, then the request will be processed.
  • If the WFS operation invoked by the client is addressed by one or more <accessControl> elements, then the request will be rejected if the client fulfills one of the <deny> rules. But even if no <deny> rule matches, the request still will only be processed if at least one <allow> rule is applicable.

Note

Note that a client must always send a token if one or more tokens are defined for the operation. Otherwise, the request will be rejected immediately.

Caution

Further security mechanisms besides the <security> settings are not offered by the WFS. So, it is your responsibility as service provider to take any reasonable physical, technical and administrative measures to secure the WFS service and the access to the 3DCityDB.

tempCache

When exporting data, the WFS must keep track of various temporary information. For instance, when resolving XLinks, the gml:id values as well as additional information about the related features and geometries must be available. This information is kept in main memory for performance. However, when memory limits are reached, the cache is written to temporary tables in the database.

By default, temporary tables are created in the 3D City Database instance itself. The tables are populated during the export operation and are automatically dropped after the operation has finished. Alternatively, the <tempCache> settings let a user choose to store the temporary information in the local file system instead. For this purpose, the <mode> property has to be switched from its default value database to local. The optional <localPath> parameter can be used to define the location where the temporary information should be stored. Without setting <localPath>, the temporary directory of the web application is used as default location.
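
For instance, a <tempCache> configuration that stores the temporary information on a local disk could look as follows (the path is illustrative):

<tempCache>
  <mode>local</mode>
  <localPath>/some/local/temp/path</localPath>
</tempCache>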

Some reasons for using a local, file-based storage are:

  • The 3D City Database instance is kept clean from any additional (temporary) table holding temporary process information. Please choose a fast local storage device with sufficient storage space.
  • If the WFS runs on a different machine than the 3D City Database instance, sending temporary information over the network might be slow. In such cases, using a local storage might help to increase performance.

Logging settings

The WFS service logs messages and errors that occur during operations to a dedicated log file by default. Entries in the log file are associated with a timestamp, the severity of the event and the IP address of the client (if available). The log is normally stored in the file WEB-INF/wfs.log within the application folder of the WFS web application.

The <logging> element in the config.xml file is used to adapt these default settings. The attribute logLevel on the <file> child element lets you change the severity level for log messages that shall be recorded in the log file to debug, info, warn, or error (default: info). Additionally, the <fileName> element lets you define an alternative absolute path and filename where to store the log file.

Note

A web application typically has limited access to the file system for security reasons. Thus, make sure that the log file is accessible for the WFS web application. Check the documentation of your servlet container for details.

If you want log messages to be printed to the console via STDOUT and STDERR, then simply set the <console> child element. The <console> element also provides a logLevel attribute to define the severity level. You can pick different log levels for the console and the log file. Printing log messages to the console is useful, for example, when the 3DCityDB WFS is running in a Docker container (see Section 7.6) or a debug environment.

Logging for the 3DCityDB WFS can be configured to either use a log file, or the console, or both.

<logging>
  <console logLevel="info"/>
  <file logLevel="info">
    <fileName>path/to/your/wfs.log</fileName>
  </file>
</logging>

Caution

Log messages are continuously written to the same log file. The WFS application does not include any mechanism to truncate or rotate the log file in case the file size grows over a certain limit. So make sure you configure log rotation on your server.

After deploying but before using the WFS service, you need to edit the config.xml file to make the service run properly. The config.xml file is located in the WEB-INF directory of the WFS web application. If you use Apache Tomcat, WEB-INF is a subfolder of the application folder, which is generally named after the WAR file and itself is a subfolder of the webapps folder in the Tomcat installation directory. This may be different if you use another servlet container.

For example, assume that the WFS web application was deployed under the context name citydb-wfs. Then the location of the WEB-INF folder and the config.xml file in a default Apache Tomcat installation is shown below.

_images/wfs_config_file_path_fig.png

Location of the WEB-INF folder and the config.xml file.

Open the config.xml file with a text or XML editor of your choice and manually edit the settings. In the config.xml file, the WFS settings are organized into the main XML elements <capabilities>, <featureTypes>, <operations>, <filterCapabilities>, <constraints>, <postProcessing>, <database>, <server>, and <logging>. The discussion of the individual settings follows this organization.

Configuration settings of the WFS service
Settings
Description
<capabilities>
Define service metadata that shall be used in the capabilities document of the WFS service.
<featureTypes>
Control which feature types shall be advertised and served by the WFS service.
<operations>
Define the operation-specific behaviour of the WFS.
<filterCapabilities>
Define the filter operations that shall be supported in queries.
<constraints>
General constraints that influence the capabilities of the WFS service and of the advertised operations.
<postProcessing>
Allow for specifying XSLT transformations to be applied to the CityGML data before sending the response to the client.
<database>
Connection details to use for connecting to a 3D City Database instance.
<server>
Server-specific options and parameters.
<logging>
Logging-specific settings like the log level and output file to use.

Caution

An XML Schema for validating the contents of the config.xml file is provided as file config.xsd in the subfolder schemas. After every edit to the config.xml file, make sure that it validates against this schema before reloading the WFS web application. Otherwise, the application might refuse to load, or unexpected behavior may occur.

Environment variables

In addition to the config.xml file, the WFS supports the following environment variables to configure further settings. The variables must have been set prior to starting the service. They always take precedence over corresponding settings in the config.xml file.

Environment variables supported by the WFS service
Environment variable
Description
CITYDB_TYPE
Used to specify the database system of the 3DCityDB the WFS service shall connect to. Allowed values are postgresql for PostgreSQL/PostGIS databases (default) and oracle for Oracle Spatial/Locator databases.
CITYDB_HOST
Host name or IP address of the server on which the database is running.
CITYDB_PORT
Port of the database server to connect to. Default value is 5432 for PostgreSQL and 1521 for Oracle, depending on the setting for CITYDB_TYPE.
CITYDB_NAME
Used to specify the name of the 3DCityDB instance to connect to. When connecting to an Oracle database, provide the database SID or service name as value.
CITYDB_SCHEMA
Schema to use when connecting to the database. The defaults are citydb for PostgreSQL and the username specified through CITYDB_USERNAME for Oracle, depending on the setting for CITYDB_TYPE.
CITYDB_USERNAME
Connect to the database server with this user.
CITYDB_PASSWORD
The password to use when connecting to the database server.
WFS_CONFIG_FILE
With this variable, you can specify a configuration file that shall be used instead of the default config.xml file in the WEB-INF directory when starting the WFS service. The variable must provide the full path to the configuration file. The WFS service must have read access to this file.
WFS_ADE_EXTENSIONS_PATH
Allows for providing an alternative directory where the WFS service shall search for ADE extensions (default: ade-extensions folder in the WEB-INF directory). The WFS service must have read access to this directory.

Using the WFS

The Web Feature Service is implemented against version 2.0 of the OGC Web Feature Service Interface Standard. Previous versions are not supported any more, and clients must make sure to use this version of the interface when sending requests to the WFS service.

The following chapters provide a documentation of the functionality offered by the 3D City Database Web Feature Service. They do not provide a general overview or description of the OGC Web Feature Service Interface Standard itself. If you need more general information about WFS, please refer to the WFS specification document instead (OGC Doc. No. 09-025r2).

Basic functionality

WFS operations

The OGC WFS 2.0 interface defines eleven operations that can be invoked by a client. A WFS server is not required to offer all operations to conform to the standard but may support a subset only. The following table lists all WFS 2.0 operations and marks those supported by the 3D City Database WFS.

Overview of supported WFS 2.0 operations
Operation
Description
Supported
GetCapabilities
The GetCapabilities operation generates a service metadata document describing the WFS service provided by a server.
X
DescribeFeatureType
The DescribeFeatureType operation returns a schema description of the CityGML feature types offered by the WFS instance.
X
GetFeature
The GetFeature operation returns a selection of CityGML features from the 3D City Database using a query expression.
X
GetPropertyValue
The GetPropertyValue operation allows the value of a feature property or part of the value of a complex feature property to be retrieved from the 3D City Database for a set of features identified using a query expression.
X
ListStoredQueries
The ListStoredQueries operation lists the stored queries available at the server.
X
DescribeStoredQueries
The DescribeStoredQueries operation provides detailed metadata about each stored query expression that the server offers.
X
CreateStoredQuery
A stored query may be created using the CreateStoredQuery operation.
X
DropStoredQuery
The DropStoredQuery operation allows previously created stored queries to be dropped from the system.
X
Transaction
The Transaction operation is used to describe data transformation operations (i.e., insert, update, replace, delete) to be applied to CityGML feature instances under the control of the web feature service.
LockFeature
The LockFeature operation is used to expose a long-term feature locking mechanism to ensure consistency in data manipulation operations (e.g., update or delete).
GetFeatureWithLock
The GetFeatureWithLock operation is functionally similar to the GetFeature operation except that in response to a GetFeatureWithLock operation, the WFS shall also lock the features in the result set.
Service URL

The service URL or service endpoint is the location where the 3D City Database WFS can be accessed by a client application over a local network or the internet. This URL is typically composed as follows:

http[s]://[host][:port]/[context_path]/wfs

The actual URL depends on the servlet container and your network configuration. Please ask your network administrator for the protocol (typically http or https), the host name and the port of the server. The context path is typically added to the URL by the servlet container. Please refer to the documentation of your servlet container for more information. The last component wfs of the URL identifies the service and makes sure that requests are routed to the WFS service implementation.

Note

For Apache Tomcat, the name of the WFS WAR file will be used as context path in the service URL. For example, if the WAR file is named citydb-wfs.war, then the service URL will be http[s]://[host][:port]/citydb-wfs/wfs. To pick a different context path, simply rename the WAR file or change Tomcat’s default behavior.

Service bindings

A service binding refers to the communication protocol that shall be used for exchanging request and response messages between a WFS server and a client. The WFS 2.0 interface standard defines HTTP GET, HTTP POST and SOAP over HTTP POST as possible service bindings for WFS 2.0 implementations.

The 3D City Database WFS implements both the HTTP POST and the HTTP GET conformance class. Therefore, a client can choose to send a request either XML-encoded using the HTTP method POST (with text/xml as content type) or KVP-encoded (key-value pairs) using the HTTP method GET. Which methods the WFS server offers is configured in the config.xml file (see Section 7.3.3).
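As an illustration only, both bindings can be exercised with a generic HTTP client such as curl. In the following sketch, the host, port and context path as well as the local file request.xml are placeholders.

# XML over HTTP POST: send the request document from a local file
curl -X POST -H "Content-Type: text/xml" --data-binary @request.xml \
  "http://localhost:8080/citydb-wfs/wfs"

# KVP over HTTP GET: encode the request as key-value pairs in the URL
curl "http://localhost:8080/citydb-wfs/wfs?SERVICE=WFS&REQUEST=GetCapabilities"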

Note

The WFS specification does not define a KVP encoding for all operations. These operations must therefore be XML-encoded and sent to the server through HTTP POST. Also note that the XML content of POST messages sent to the server must be well-formed and valid with respect to the WFS 2.0 XML Schema.

The following table summarizes the operations and the supported service binding as offered by the 3D City Database WFS.

Service bindings for the supported WFS 2.0 operations.
Operation
Service Binding
GetCapabilities
XML over HTTP POST and KVP over HTTP GET
DescribeFeatureType
XML over HTTP POST and KVP over HTTP GET
GetFeature
XML over HTTP POST and KVP over HTTP GET
GetPropertyValue
XML over HTTP POST and KVP over HTTP GET
ListStoredQueries
XML over HTTP POST and KVP over HTTP GET
DescribeStoredQuery
XML over HTTP POST and KVP over HTTP GET
CreateStoredQuery
XML over HTTP POST
DropStoredQuery
XML over HTTP POST and KVP over HTTP GET
CityGML feature types

The 3D City Database WFS supports all CityGML top-level feature types, and corresponding feature instances will be sent to the client upon request. If you just want to advertise a subset of the CityGML feature types, you can restrict the feature types in the config.xml settings (cf. Section 7.3.2). In addition to the predefined CityGML feature types, the WFS can also support feature types defined in a CityGML ADE. This requires a corresponding ADE extension to be installed for the WFS and to be registered with the 3DCityDB instance (cf. Section 7.2).

The supported CityGML feature types together with their official XML namespaces (CityGML version 2.0 and 1.0) and recommended prefixes are listed in the table below.

Supported CityGML top-level feature types with XML namespaces and prefixes.
Feature type                XML namespace (CityGML 2.0 / CityGML 1.0)            XML prefix
Building                    http://www.opengis.net/citygml/building/2.0          bldg
                            http://www.opengis.net/citygml/building/1.0
Bridge                      http://www.opengis.net/citygml/bridge/2.0            brid
Tunnel                      http://www.opengis.net/citygml/tunnel/2.0            tun
TransportationComplex,      http://www.opengis.net/citygml/transportation/2.0    tran
Road, Track, Square,        http://www.opengis.net/citygml/transportation/1.0
Railway
CityFurniture               http://www.opengis.net/citygml/cityfurniture/2.0     frn
                            http://www.opengis.net/citygml/cityfurniture/1.0
LandUse                     http://www.opengis.net/citygml/landuse/2.0           luse
                            http://www.opengis.net/citygml/landuse/1.0
WaterBody                   http://www.opengis.net/citygml/waterbody/2.0         wtr
                            http://www.opengis.net/citygml/waterbody/1.0
PlantCover,                 http://www.opengis.net/citygml/vegetation/2.0        veg
SolitaryVegetationObject    http://www.opengis.net/citygml/vegetation/1.0
ReliefFeature               http://www.opengis.net/citygml/relief/2.0            dem
                            http://www.opengis.net/citygml/relief/1.0
GenericCityObject           http://www.opengis.net/citygml/generics/2.0          gen
                            http://www.opengis.net/citygml/generics/1.0
CityObjectGroup             http://www.opengis.net/citygml/cityobjectgroup/2.0   grp
                            http://www.opengis.net/citygml/cityobjectgroup/1.0

Note that the Bridge and Tunnel modules were introduced with CityGML 2.0 and therefore have no CityGML 1.0 namespace.

Simply declare the above namespaces in XML-encoded requests using the notation xmlns:prefix="namespace_uri" and use the feature type name to request corresponding features from the WFS. If you pick a CityGML 2.0 namespace, the response will be encoded in CityGML 2.0. If you want the response to be encoded in CityGML 1.0 instead, choose a CityGML 1.0 namespace. Remember to list the CityGML versions to be supported by the WFS in the config.xml file (see Section 7.3.2).

For KVP-encoded requests, the NAMESPACES parameter must be used to declare namespaces and their prefixes used in the request based on the format xmlns(prefix, escaped_uri).

Note

The 3DCityDB WFS automatically supports the default CityGML prefixes in KVP-encoded requests. Thus, if you pick one of the default prefixes from the list above, you do not have to use the NAMESPACES parameter.

The CityGML version that will be associated with the prefix depends on the default CityGML version in your config.xml (cf. Section 7.3.2). If you want to request a specific version instead, you can append a “1” for CityGML 1.0 or a “2” for CityGML 2.0 to the prefix. For example, use the prefix bldg2 to request features in CityGML 2.0. If you do not want to rely on these predefined prefixes, you can always use the NAMESPACES parameter instead.
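For example, the following KVP-encoded GetFeature request (a sketch; the COUNT value is arbitrary) uses the predefined bldg2 prefix to request buildings encoded in CityGML 2.0 without declaring a NAMESPACES parameter.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetFeature&
TYPENAMES=bldg2:Building&
COUNT=10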

Exception reports

If the WFS encounters an error while parsing or processing a request, an XML document indicating that error is generated and sent to the client as exception response. Please refer to the WFS 2.0 specification for the structure and syntax of the exception response.

GetCapabilities

The GetCapabilities operation generates an XML-encoded service metadata document describing the WFS service provided by a server. The capabilities document contains relevant technical and non-technical information about the service and its provider. Its content mainly depends on the configuration of the WFS in the config.xml settings file.

The following XML snippet shows an XML encoding of a GetCapabilities operation.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetCapabilities service="WFS"
 xmlns:wfs="http://www.opengis.net/wfs/2.0"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://www.opengis.net/wfs/2.0
 http://schemas.opengis.net/wfs/2.0/wfs.xsd"/>

The declaration of the WFS XML namespace http://www.opengis.net/wfs/2.0 is mandatory to be able to validate the request against the official WFS XML Schema definition. The reference to the schema location using the xsi:schemaLocation attribute, however, is optional. It is nevertheless recommended when the XML request is written manually by the user (and not generated automatically by a client software), because it helps ensure schema validity. By default, the WFS service will reject invalid requests (see Section 7.3.3).

The following table shows the XML attributes that can be used in the GetCapabilities request and are supported by the WFS implementation.

Supported XML attributes of a GetCapabilities operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the service type. The value “WFS” is fixed.
AcceptVersions
O

Used for version number negotiation with the WFS server (cf. OGC Document No. 06-121r3:2009).

As an alternative to XML encoding, the GetCapabilities operation may also be invoked through a KVP-encoded HTTP GET request.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
REQUEST=GetCapabilities&
ACCEPTVERSIONS=2.0.0,2.0.2

The available KVP parameters are listed below.

Supported KVP parameters of a GetCapabilities operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
ACCEPTVERSIONS
O

see above

DescribeFeatureType

The DescribeFeatureType operation returns a schema description of the feature types advertised by the 3D City Database WFS instance. Which feature types are offered by the WFS is controlled through the config.xml settings file (cf. Section 7.4.1.4). The schema defines the structure and content of the features (thematic and spatial attributes, nested features, etc.) as well as how features are encoded in responses to GetFeature requests.

The following example shows a valid DescribeFeatureType operation requesting the XML Schema definition of the CityGML 1.0 Building feature type.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:DescribeFeatureType service="WFS" version="2.0.0"
 xmlns:wfs="http://www.opengis.net/wfs/2.0"
 xmlns:bldg="http://www.opengis.net/citygml/building/1.0">
  <wfs:TypeName>bldg:Building</wfs:TypeName>
</wfs:DescribeFeatureType>

The DescribeFeatureType operation takes the following XML attributes.

Supported XML attributes of a DescribeFeatureType operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the
service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface
Standard to be used in the
communication.
outputFormat
O
application/gml+xml;
version=3.1
Controls the format of the schema
description. By default, the request
results in a CityGML / GML 3.1.1
application schema. The outputFormat
attribute may also take the value
“application/json”, in which case the
response is a CityJSON schema document.
handle
O

The handle parameter allows a client to
associate a mnemonic name with the
request that will be used in exception
reports.

The <wfs:TypeName> child element of the DescribeFeatureType operation identifies the feature type for which the XML Schema description is requested. Be careful to use the correct spelling of the feature type name (as specified by the CityGML standard) and to associate the name with the correct CityGML XML namespace (see Section 7.4.1.4). The <wfs:TypeName> element may occur multiple times to request schema definitions of several feature types in a single DescribeFeatureType operation. If the <wfs:TypeName> element is omitted, then the complete base schema is returned by the WFS.
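For illustration, a request for the schema descriptions of two feature types in a single operation might look like the following sketch.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:DescribeFeatureType service="WFS" version="2.0.0"
 xmlns:wfs="http://www.opengis.net/wfs/2.0"
 xmlns:bldg="http://www.opengis.net/citygml/building/1.0"
 xmlns:tran="http://www.opengis.net/citygml/transportation/1.0">
  <wfs:TypeName>bldg:Building</wfs:TypeName>
  <wfs:TypeName>tran:Road</wfs:TypeName>
</wfs:DescribeFeatureType>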

The DescribeFeatureType operation can alternatively be invoked through HTTP GET with key-value pairs.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.2&
REQUEST=DescribeFeatureType&
TYPENAME=bldg:Building,tran:Road

The following KVP parameters are supported.

Supported KVP parameters of a DescribeFeatureType operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
VERSION
M
2.0.x
see above
NAMESPACES
O

Used to specify namespaces and their
prefixes. The format shall be
xmlns(prefix,escaped_url).
TYPENAME
M

A comma-separated list of feature types
to describe.
OUTPUTFORMAT
O
application/gml+xml;
version=3.1
see above

The TYPENAME parameter lists the feature types to describe. Similar to an XML-encoded request, both the feature type names and the XML namespaces must be correct. XML namespaces and their prefixes can be specified using the NAMESPACES parameter. If you use the default CityGML prefixes though, the NAMESPACES parameter can be skipped (see Section 7.4.1.4).

GetFeature

The GetFeature operation returns a selection of CityGML features from the 3DCityDB that satisfy the query expression provided by the client. The query expression can restrict the feature instances that shall be presented in the response document through logical, comparison and spatial filter predicates. Moreover, the feature properties to be included in the response document can be explicitly enumerated and thus restricted.

A simple GetFeature request using the predefined GetFeatureById stored query is shown below. The gml:id of the city object that shall be returned by the WFS is passed as the id parameter.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0"
 xmlns:wfs="http://www.opengis.net/wfs/2.0">
  <wfs:StoredQuery id="http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById">
    <wfs:Parameter name="id">ID_0815</wfs:Parameter>
  </wfs:StoredQuery>
</wfs:GetFeature>

The WFS will answer the above request with either the CityGML city object(s) whose gml:id value matches ID_0815 or with an exception report in case no matching city object was found in the 3D City Database.

If a GetFeature request results in more than one city object or consists of more than one (stored) query expression, the response will be wrapped by one or more <wfs:FeatureCollection> elements. Please refer to the WFS 2.0 specification for details on the encoding of the response document.

The GetFeature operation can be influenced by the following XML attributes.

Supported XML attributes of a GetFeature operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface Standard to be used in the communication.
handle
O

The handle parameter allows a client to associate a mnemonic name with the request that will be used in exception reports.
outputFormat
O
application/gml+xml;
version=3.1
Controls the encoding of the response. By default, the WFS uses CityGML / GML 3.1.1. The outputFormat attribute may also take the value “application/json”, in which case the response is encoded in CityJSON.
count
O
unlimited
The maximum number of features to be returned by the WFS service.
startIndex
O
0
The startIndex parameter indicates the index within the result set from which the server shall begin presenting results in the response document.
The first index is 0.
resultType
O
results
If the value of the resultType parameter is set to “results” the server generates a response document containing features that satisfy the operation.
If set to “hits” the server generates an empty response document indicating the count of the total number of features that the operation would return.

The following query illustrates how to fetch all city objects of a specific feature type from the 3D City Database.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0"
  xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:fes="http://www.opengis.net/fes/2.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <wfs:Query typeNames="bldg:Building"/>
</wfs:GetFeature>

The requested feature type is given as value of the typeNames attribute on the <wfs:Query> element. Make sure to declare and use the correct CityGML namespaces (see Section 7.4.1.4). The above query would return all Building instances from the database encoded in CityGML 2.0. Of course, the WFS has to advertise the feature type and CityGML version (see Section 7.3.2). To get the number of all buildings in the database (without actually fetching the buildings themselves), the resultType attribute has to be set to hits.

The count attribute lets you restrict the number of returned instances, and startIndex can be used to indicate that the response document shall begin with the n-th object from the result set (the first object is at index 0). Thus, both parameters allow accessing a specific subset of the objects that fulfill the request. If result paging is enabled for the WFS (see Section 7.3.5) and at least the count parameter is set, then the client can also use the previous and next URLs to scroll through the result set.

If you want to query both Building and CityFurniture instances with a single GetFeature request, you simply have to use two <wfs:Query> child elements, as sketched below. The WFS also supports fetching all instances of a given feature type and all its (transitive) subtypes using the schema-element() function. For instance, typeNames="schema-element(core:_CityObject)" will return all city object instances.
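A minimal sketch of such a request combining two queries (assuming both feature types are advertised by the WFS) is shown below.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0"
  xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  xmlns:frn="http://www.opengis.net/citygml/cityfurniture/2.0">
  <wfs:Query typeNames="bldg:Building"/>
  <wfs:Query typeNames="frn:CityFurniture"/>
</wfs:GetFeature>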

The <wfs:Query> element allows the following XML attributes.

Supported XML attributes of the wfs:Query element. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
typeNames
M

The typeNames attribute defines the feature type of the instances to be returned.
handle
O

The handle parameter allows a client to associate a mnemonic name with the request that will be used in exception reports.
srsName
O
same as in database
If the srsName attribute is provided, a coordinate transformation into the provided SRS is applied to the feature instances in the result set.

In general, the WFS returns all instances of the requested feature type including all mandatory and optional thematic and spatial properties. In order to apply a projection to the properties of the feature type, simply enumerate the properties to be fetched using one or more <wfs:PropertyName> elements. If you want to restrict the instances to be returned according to thematic or spatial criteria, you can provide a <fes:Filter> expression that denotes how city objects shall be filtered to produce the result set.

In addition to XML-encoded GetFeature requests, this WFS also supports KVP-encoded requests. Using KVP encoding, one or more ad-hoc queries or exactly one stored query can be expressed. Complex filter expressions can also be encoded. The following example shows a simple GetFeature request to query all Building objects within a given bounding box. Note that the values of the parameters NAMESPACES and SRSNAME would have to be URL encoded in this example for the query to work; the encoding has been omitted here for the sake of clarity.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.2&
REQUEST=GetFeature&
NAMESPACES=xmlns(bldg,http://www.opengis.net/citygml/building/2.0)&
TYPENAMES=bldg:Building&
BBOX=18.54,-72.3544,18.62,-72.2564&
SRSNAME=http://www.opengis.net/def/crs/epsg/0/4326

There are many parameters available for the KVP encoding of the GetFeature operation. Some of them are mutually exclusive or depend on each other. Please refer to the WFS specification (OGC Doc. No. 09-025r2) for an overview of all parameter dependencies.

Supported KVP parameters of a GetFeature operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
VERSION
M
2.0.x
see above
NAMESPACES
O

Used to specify namespaces and their prefixes. The format shall be xmlns(prefix,escaped_url).
COUNT
O
unlimited
see above
STARTINDEX
O
0
see above
OUTPUTFORMAT
O
application/gml+xml;
version=3.1
see above
RESULTTYPE
O
results
see above
Additional KVP parameters for ad-hoc queries only. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
TYPENAMES
M

see above
SRSNAME
O
same as in database
see above
PROPERTYNAME
O

List of properties (encoded as QName) that shall be included in the response (projection).
FILTER
O

Filter expression encoded using the language specified by FILTER_LANGUAGE.
FILTER_LANGUAGE
O
queryLanguage:
OGC-FES:Filter
Filter language (default: XML encoding according to OGC FES specification).
Additional KVP parameters for stored queries only. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
STOREDQUERY_ID
M

The identifier of the stored query to invoke.
storedquery_parameter
=value
O

Each parameter of the stored query shall be encoded in KVP as key-value pair.

Make sure to use proper XML namespaces for feature type names, property names and XML-encoded filter expressions. XML namespaces and their prefixes can be specified using the NAMESPACES parameter. However, the WFS handles the default CityGML prefixes automatically, so an additional declaration via the NAMESPACES parameter is unnecessary when using the default prefixes.

Example 1

This example fetches a subset of properties of the Building feature type. The specific building instances that are retrieved by the request are identified through a ResourceId filter. In addition, the response shall be sorted by the bldg:measuredHeight property of the matching buildings. The output format is set to CityJSON.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" outputFormat="application/json"
  xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:fes="http://www.opengis.net/fes/2.0"
  xmlns:core="http://www.opengis.net/citygml/2.0" xmlns:bldg="http://www.opengis.net/citygml/building/2.0">
  <wfs:Query typeNames="bldg:Building">
    <wfs:PropertyName>core:externalReference</wfs:PropertyName>
    <wfs:PropertyName>bldg:class</wfs:PropertyName>
    <wfs:PropertyName>bldg:address</wfs:PropertyName>
    <wfs:PropertyName>bldg:lod2Solid</wfs:PropertyName>
    <fes:Filter>
      <fes:ResourceId rid="ID_0815"/>
      <fes:ResourceId rid="ID_0816"/>
      <fes:ResourceId rid="ID_0817"/>
    </fes:Filter>
    <fes:SortBy>
      <fes:SortProperty>
        <fes:ValueReference>bldg:measuredHeight</fes:ValueReference>
      </fes:SortProperty>
    </fes:SortBy>
  </wfs:Query>
</wfs:GetFeature>

The equivalent KVP-encoding is shown below.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetFeature&
TYPENAMES=bldg:Building&
PROPERTYNAME=core:externalReference,bldg:class,bldg:address,bldg:lod2Solid&
RESOURCEID=ID_0815,ID_0816,ID_0817&
SORTBY=bldg:measuredHeight&
OUTPUTFORMAT=application%2Fjson

Example 2

In this example, all road objects carrying a generic integer attribute named lanes are fetched. An XPath expression is required to reference the gen:name attribute of the complex property gen:intAttribute. Note that the matchCase attribute of PropertyIsEqualTo is set to false in order to perform a case-insensitive comparison. The count attribute ensures that at most 10 roads are contained in the response document. If response paging is enabled for your WFS (see Section 7.3.5), the response will contain next and previous links that allow a client to browse through the entire result set.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" count="10" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:gen="http://www.opengis.net/citygml/generics/1.0"
  xmlns:tran="http://www.opengis.net/citygml/transportation/1.0">
  <wfs:Query typeNames="tran:Road">
    <fes:Filter>
      <fes:PropertyIsEqualTo matchCase="false">
        <fes:ValueReference>gen:intAttribute/@gen:name</fes:ValueReference>
        <fes:Literal>lanes</fes:Literal>
      </fes:PropertyIsEqualTo>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

Example 3

This example extends the previous one by fetching all road objects whose generic lanes attribute has a value greater than two.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" count="10" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:gen="http://www.opengis.net/citygml/generics/1.0"
  xmlns:tran="http://www.opengis.net/citygml/transportation/1.0">
  <wfs:Query typeNames="tran:Road">
    <fes:Filter>
      <fes:PropertyIsGreaterThan>
        <fes:ValueReference>gen:intAttribute[@gen:name='lanes']/gen:value</fes:ValueReference>
        <fes:Literal>2</fes:Literal>
      </fes:PropertyIsGreaterThan>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

The equivalent KVP-encoding is shown below. Again, note that the filter expression has to be URL encoded; otherwise, the XML notation will not be correctly transmitted to the server. The encoding has been omitted here for the sake of clarity.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetFeature&
TYPENAMES=tran:Road&
COUNT=10&
FILTER=<fes:Filter>
       <fes:PropertyIsGreaterThan>
       <fes:ValueReference>gen:intAttribute[@gen:name='lanes']/gen:value</fes:ValueReference>
       <fes:Literal>2</fes:Literal>
       </fes:PropertyIsGreaterThan>
       </fes:Filter>

The URL encoding of the above filter expression is given in the following.

FILTER=%3Cfes%3AFilter%3E%3Cfes%3APropertyIsGreaterThan%3E%3Cfes%3AValueReference%3Egen%3AintAttribute%5B%40gen%3A
name%3D%27lanes%27%5D%2Fgen%3Avalue%3C%2Ffes%3AValueReference%3E%3Cfes%3ALiteral%3E2%3C%2Ffes%3ALiteral%3E%3C%2Ffes%3A
PropertyIsGreaterThan%3E%3C%2Ffes%3AFilter%3E
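Instead of encoding the filter by hand, the URL encoding can also be delegated to the HTTP client. As a sketch (host, port and context path are placeholders; curl is not part of the WFS itself), curl builds the encoded query string with its -G and --data-urlencode options:

curl -G "http://localhost:8080/citydb-wfs/wfs" \
  --data-urlencode "SERVICE=WFS" \
  --data-urlencode "VERSION=2.0.0" \
  --data-urlencode "REQUEST=GetFeature" \
  --data-urlencode "TYPENAMES=tran:Road" \
  --data-urlencode "COUNT=10" \
  --data-urlencode "FILTER=<fes:Filter><fes:PropertyIsGreaterThan><fes:ValueReference>gen:intAttribute[@gen:name='lanes']/gen:value</fes:ValueReference><fes:Literal>2</fes:Literal></fes:PropertyIsGreaterThan></fes:Filter>"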

Example 4

In this example, buildings are fetched if they contain a nested RoofSurface feature with a photovoltaic suitability class between 1 and 3. The pv_class attribute is modeled as generic integer attribute of the roof surface features.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:gen="http://www.opengis.net/citygml/generics/1.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/1.0">
  <wfs:Query typeNames="bldg:Building">
    <fes:Filter>
      <fes:PropertyIsBetween>
        <fes:ValueReference>bldg:boundedBy/bldg:RoofSurface/gen:intAttribute[@gen:name='pv_class']/gen:value</fes:ValueReference>
        <fes:LowerBoundary>
          <fes:Literal>1</fes:Literal>
        </fes:LowerBoundary>
        <fes:UpperBoundary>
          <fes:Literal>3</fes:Literal>
        </fes:UpperBoundary>
      </fes:PropertyIsBetween>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

Example 5

This example returns the number (resultType="hits") of buildings in Berlin along the road Unter den Linden. Note that the address is not queried using elements of the xAL address language, since xAL is too flexible for simple filter expressions. Instead, an ADE has been defined to keep address queries as simple as possible (read more in Section 4.5.8.10).

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" resultType="hits"
  xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:fes="http://www.opengis.net/fes/2.0"
  xmlns:core="http://www.opengis.net/citygml/1.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/1.0"
  xmlns:citydb="http://www.3dcitydb.org/citygml-ade/3.0/citygml/1.0">
  <wfs:Query typeNames="bldg:Building">
    <fes:Filter>
      <fes:And>
        <fes:PropertyIsEqualTo>
          <fes:ValueReference>bldg:address/core:Address/citydb:city</fes:ValueReference>
          <fes:Literal>Berlin</fes:Literal>
        </fes:PropertyIsEqualTo>
        <fes:PropertyIsLike wildCard="*" singleChar="." escapeChar="/">
          <fes:ValueReference>bldg:address/core:Address/citydb:street</fes:ValueReference>
          <fes:Literal>Unter den Linden*</fes:Literal>
        </fes:PropertyIsLike>
      </fes:And>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

Example 6

This query fetches all city objects whose gml:Envelope geometry stored in the gml:boundedBy property Intersects a given polygon. Note the usage of the schema-element() function. It allows you to request instances of the feature type provided as argument and of all its (transitive) subtypes. The reference system of the query geometry is denoted by the srsName attribute on the <gml:Polygon> element. If you omit the srsName attribute, the reference system of the 3D City Database is assumed.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature xmlns:wfs="http://www.opengis.net/wfs/2.0" service="WFS" version="2.0.0"
  outputFormat="application/gml+xml; version=3.1" xmlns:gml="http://www.opengis.net/gml"
  xmlns:core="http://www.opengis.net/citygml/2.0" xmlns:fes="http://www.opengis.net/fes/2.0">
  <wfs:Query typeNames="schema-element(core:_CityObject)">
    <fes:Filter>
      <fes:Intersects>
        <fes:ValueReference>gml:boundedBy</fes:ValueReference>
        <gml:Polygon srsName="http://www.opengis.net/def/crs/epsg/0/4326">
          <gml:exterior>
            <gml:LinearRing>
              <gml:posList>13.3077157 52.5101551 13.3095932 52.5101551 13.3095932 52.511115
                13.3077157 52.511115 13.3077157 52.5101551</gml:posList>
            </gml:LinearRing>
          </gml:exterior>
        </gml:Polygon>
      </fes:Intersects>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

Example 7

This example illustrates two queries that fetch Building and SolitaryVegetationObject instances that are within a 500m distance from a given point location.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:gml="http://www.opengis.net/gml" xmlns:fes="http://www.opengis.net/fes/2.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  xmlns:veg="http://www.opengis.net/citygml/vegetation/2.0">
  <wfs:Query typeNames="bldg:Building" handle="q01">
    <fes:Filter>
      <fes:DWithin>
        <fes:ValueReference>gml:boundedBy</fes:ValueReference>
        <gml:Point srsName="http://www.opengis.net/def/crs/epsg/0/4326">
          <gml:pos>13.3068144 52.5096392</gml:pos>
        </gml:Point>
        <fes:Distance uom="m">500</fes:Distance>
      </fes:DWithin>
    </fes:Filter>
  </wfs:Query>
  <wfs:Query typeNames="veg:SolitaryVegetationObject" handle="q02">
    <fes:Filter>
      <fes:DWithin>
        <fes:ValueReference>gml:boundedBy</fes:ValueReference>
        <gml:Point srsName="http://www.opengis.net/def/crs/epsg/0/4326">
          <gml:pos>13.3068144 52.5096392</gml:pos>
        </gml:Point>
        <fes:Distance uom="km">0.5</fes:Distance>
      </fes:DWithin>
    </fes:Filter>
  </wfs:Query>
</wfs:GetFeature>

The two ad-hoc queries can also be expressed as KVP-encoded request. The parameter values for each query have to be delimited using parentheses.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetFeature&
TYPENAMES=(bldg:Building)(veg:SolitaryVegetationObject)&
FILTER=(<fes:Filter>
        <fes:DWithin>
        <fes:ValueReference>gml:boundedBy</fes:ValueReference>
        <gml:Point srsName="http://www.opengis.net/def/crs/epsg/0/4326">
        <gml:pos>13.3068144 52.5096392</gml:pos>
        </gml:Point>
        <fes:Distance uom="m">500</fes:Distance>
        </fes:DWithin>
        </fes:Filter>)
       (<fes:Filter>
        <fes:DWithin>
        <fes:ValueReference>gml:boundedBy</fes:ValueReference>
        <gml:Point srsName="http://www.opengis.net/def/crs/epsg/0/4326">
        <gml:pos>13.3068144 52.5096392</gml:pos>
        </gml:Point>
        <fes:Distance uom="km">0.5</fes:Distance>
        </fes:DWithin>
        </fes:Filter>)

Example 8

The listing below exemplifies the use of result paging. It shows an excerpt of the response document to a GetFeature operation requesting buildings from the 3DCityDB. Because the request uses the parameter count=10, only the first 10 of the 312 matching buildings are returned. The client has to invoke the next URL to retrieve the next 10 buildings.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:FeatureCollection xmlns:xAL="urn:oasis:names:tc:ciq:xsdschema:xAL:2.0" xmlns:gml="http://www.opengis.net/gml"
  xmlns:bldg="http://www.opengis.net/citygml/building/2.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  timeStamp="2020-03-11T10:12:42" numberMatched="312" numberReturned="10"
  next="http://some.url.com/citydb-wfs/wfs?pageId=58a5bffa8c416be7-48d3fc698001d622-107643930c40bdc4">
  <wfs:member>
    <bldg:Building gml:id="BLDG_0003000a000afdae">
      ...
    </bldg:Building>
  </wfs:member>
</wfs:FeatureCollection>

GetPropertyValue

The GetPropertyValue operation enables a client to query the value of a property of one or more feature instances in the 3D City Database. Similar to the GetFeature operation, the set of feature instances can be restricted using filter expressions.

The example below shows a simple GetPropertyValue request.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetPropertyValue service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  valueReference="bldg:measuredHeight">
  <wfs:Query typeNames="bldg:Building">
    <fes:Filter>
      <fes:ResourceId rid="ID_0815"/>
    </fes:Filter>
  </wfs:Query>
</wfs:GetPropertyValue>

The name of the property whose value shall be presented to the client is passed to the GetPropertyValue operation via the valueReference attribute. The valueReference uses an XPath expression to identify the feature property. Therefore, it is also possible to request part of the value of a complex feature property or even properties of nested features. Like with the GetFeature operation, the requested feature type is given as value of the typeNames attribute on the <wfs:Query> element. The client must declare and use the correct CityGML namespaces (see Section 7.4.1.4).

As response to the above request, the WFS will send a <wfs:ValueCollection> document containing the value of the bldg:measuredHeight property of the building with gml:id ID_0815.
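Such a response might look similar to the following simplified sketch; the timestamp and the height value are placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:ValueCollection xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  timeStamp="2020-03-11T10:12:42" numberMatched="1" numberReturned="1">
  <wfs:member>
    <bldg:measuredHeight uom="m">11.0</bldg:measuredHeight>
  </wfs:member>
</wfs:ValueCollection>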

In addition to the valueReference attribute, the GetPropertyValue operation may take the same XML attributes as the GetFeature operation as discussed in Section 7.4.4. Please also refer to this chapter for the encoding of ad-hoc queries (<wfs:Query>) and stored queries (<wfs:StoredQuery>) within a GetPropertyValue operation.

Supported XML attributes of a GetPropertyValue operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
valueReference
M

XPath expression identifying the feature property to be requested.

The GetPropertyValue operation is also available as KVP-encoded request. The following snippet illustrates the KVP encoding of the above request.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetPropertyValue&
TYPENAMES=bldg:Building&
VALUEREFERENCE=bldg:measuredHeight&
RESOURCEID=ID_0815

The available KVP parameters are identical to those of a GetFeature request (cf. Section 7.4.1.4). The VALUEREFERENCE parameter has to be provided in addition.

Additional KVP parameters of a GetPropertyValue operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
VALUEREFERENCE
M

see above

Make sure to use proper XML namespaces for feature type names, property names and XML-encoded filter expressions. XML namespaces and their prefixes can be specified using the NAMESPACES parameter. However, the 3DCityDB WFS handles the default CityGML prefixes automatically, so an additional declaration via the NAMESPACES parameter is unnecessary when using the default prefixes.

Example 1

This example shows the usage of the GetPropertyValue operation to query the names of all gen:stringAttribute properties associated with a specific Building object. The Building is identified through a ResourceId filter.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetPropertyValue service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  xmlns:gen="http://www.opengis.net/citygml/generics/2.0"
  valueReference="gen:stringAttribute/@gen:name">
  <wfs:Query typeNames="bldg:Building">
    <fes:Filter>
      <fes:ResourceId rid="ID_0815"/>
    </fes:Filter>
  </wfs:Query>
</wfs:GetPropertyValue>

Example 2

The XPath expression of the valueReference may also point to the property of a nested feature. The following request retrieves the value of the generic pv_class property used to store the photovoltaic suitability class of a RoofSurface of the same building. The RoofSurface is identified within the XPath expression via its gml:id ID_ROOF_01. The list of possible gml:id values might be the result of a preceding GetPropertyValue operation.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetPropertyValue service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0"
  xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:bldg="http://www.opengis.net/citygml/building/2.0"
  xmlns:gen="http://www.opengis.net/citygml/generics/2.0"
  valueReference="bldg:boundedBy/bldg:RoofSurface[@gml:id='ID_ROOF_01']/gen:intAttribute[@gen:name='pv_class']/gen:value">
  <wfs:Query typeNames="bldg:Building">
    <fes:Filter>
      <fes:ResourceId rid="ID_0815"/>
    </fes:Filter>
  </wfs:Query>
</wfs:GetPropertyValue>

The equivalent KVP-encoded request is shown below. Note that the URL encoding of the XPath expression has been omitted for the sake of clarity.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetPropertyValue&
TYPENAMES=bldg:Building&
VALUEREFERENCE=bldg:boundedBy/bldg:RoofSurface[@gml:id='ID_ROOF_01']/gen:intAttribute[@gen:name='pv_class']/gen:value&
RESOURCEID=ID_0815

The URL encoding of the above XPath expression results in the following string.

VALUEREFERENCE=bldg%3AboundedBy%2Fbldg%3ARoofSurface%5B%40gml%3Aid%3D%27ID_ROOF_01%27%5D%2Fgen%3A
intAttribute%5B%40gen%3Aname%3D%27pv_class%27%5D%2Fgen%3Avalue

ListStoredQueries

Since version 2.0 of the WFS standard, a WFS server is supposed to manage predefined and parameterized feature query expressions (so-called stored queries) that are stored by the server and can be repeatedly invoked by the client using different parameter values. Stored queries hide the complexity of the underlying query expression from the client, since all the client needs to know is the unique identifier of the stored query as well as the names and types of its parameters in order to invoke the operation. For example, a stored query is referenced by its identifier in a GetFeature operation (see Section 7.4.4).

The ListStoredQueries operation provides the list of stored queries offered by the WFS server. The response document contains the unique identifier of each stored query, which can then be used in a subsequent DescribeStoredQuery operation to receive the details of a specific stored query from the WFS server. The following listing presents an example ListStoredQueries request.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:ListStoredQueries service="WFS" version="2.0.0"
 xmlns:wfs="http://www.opengis.net/wfs/2.0"/>

The ListStoredQueries operation may take the following XML attributes as parameters.

Supported XML attributes of a ListStoredQueries operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the
service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface
Standard to be used in the
communication.
handle
O

The handle parameter allows a client to
associate a mnemonic name with the
request that will be used in exception
reports.

The corresponding KVP-encoded request is shown below.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=ListStoredQueries

The following KVP parameters can be used when invoking the ListStoredQueries operation.

Supported KVP parameters of a ListStoredQueries operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
VERSION
M
2.0.x
see above

DescribeStoredQuery

The DescribeStoredQuery operation is used to provide the details of one or more stored queries offered by the server. The following listing exemplifies a DescribeStoredQuery request.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:DescribeStoredQueries service="WFS" version="2.0.0"
 xmlns:wfs="http://www.opengis.net/wfs/2.0">
  <wfs:StoredQueryId>http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById</wfs:StoredQueryId>
</wfs:DescribeStoredQueries>

The <wfs:StoredQueryId> child element provides the unique identifier of the stored query (see the ListStoredQueries operation in Section 7.4.6). By providing more than one unique identifier through multiple <wfs:StoredQueryId> elements, the descriptions of several stored queries can be requested in a single DescribeStoredQuery operation. If the <wfs:StoredQueryId> element is omitted, a description of all stored queries available at the WFS server is returned to the client.

The above request will produce a response similar to the following listing.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<wfs:DescribeStoredQueriesResponse
 xmlns:fes="http://www.opengis.net/fes/2.0"
 xmlns:xs="http://www.w3.org/2001/XMLSchema"
 xmlns:wfs="http://www.opengis.net/wfs/2.0">
  <wfs:StoredQueryDescription id="http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById">
    <wfs:Title xml:lang="en">Get feature by identifier</wfs:Title>
    <wfs:Abstract xml:lang="en">Retrieves a feature by its gml:id.</wfs:Abstract>
    <wfs:Parameter name="id" type="xs:string">
      <wfs:Title xml:lang="en">Identifier</wfs:Title>
      <wfs:Abstract xml:lang="en">The gml:id of the feature to be retrieved.</wfs:Abstract>
    </wfs:Parameter>
    <wfs:QueryExpressionText returnFeatureTypes=""
     language="urn:ogc:def:queryLanguage:OGC-WFS::WFS_QueryExpression"
     isPrivate="false">
      <wfs:Query typeNames="schema-element(core:_CityObject)">
        <fes:Filter>
          <fes:ResourceId rid="${id}"/>
        </fes:Filter>
      </wfs:Query>
    </wfs:QueryExpressionText>
  </wfs:StoredQueryDescription>
</wfs:DescribeStoredQueriesResponse>

Every WFS implementation must at minimum offer the GetFeatureById stored query having the unique identifier http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById as shown above. This stored query takes a single parameter id of type xs:string and returns zero or exactly one feature whose resource identifier matches the id value. For the 3D City Database WFS, the id value is evaluated against the gml:id of each feature in the database to find a match.
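For instance, the GetFeatureById stored query can be invoked through a KVP-encoded GetFeature request along the lines of the following sketch. The stored query identifier would have to be URL encoded; this is omitted here for readability.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=GetFeature&
STOREDQUERY_ID=http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById&
id=ID_0815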

The returnFeatureTypes attribute lists the feature types that may be returned by a stored query. Note that this string is empty for the GetFeatureById query. Consequently, the query may return a feature instance of any advertised feature type whose gml:id matches. The set of advertised feature types can be influenced in the config.xml settings file.

The DescribeStoredQuery operation allows the following XML attributes.

Supported XML attributes of a DescribeStoredQuery operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the
service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface
Standard to be used in the
communication.
handle
O

The handle parameter allows a client to
associate a mnemonic name with the
request that will be used in exception
reports.

A KVP-encoded DescribeStoredQueries request is shown below.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.2&
REQUEST=DescribeStoredQueries&
STOREDQUERY_ID=http%3A%2F%2Fwww.opengis.net%2Fdef%2Fquery%2FOGC-WFS%2F0%2FGetFeatureById

The supported KVP parameters are listed in the following table.

Supported KVP parameters of a DescribeStoredQuery operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
VERSION
M
2.0.x
see above
STOREDQUERY_ID
O

A comma-separated list of stored query
identifiers to describe.

CreateStoredQuery

The CreateStoredQuery operation enables you to define and create your own server-side stored queries. Once a stored query has been created, it is immediately available for use in requests (e.g., GetFeature or GetPropertyValue operations) and can be queried through the ListStoredQueries and DescribeStoredQueries operations.

The following listing shows a simple example to create a stored query for requesting Bridge features using a polygon-based spatial filter.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:CreateStoredQuery service="WFS" version="2.0.0"
  xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:fes="http://www.opengis.net/fes/2.0"
  xmlns:gml="http://www.opengis.net/gml"  xmlns:brid="http://www.opengis.net/citygml/bridge/2.0">
  <wfs:StoredQueryDefinition id="urn:StoredQueries:BridgesInPolygon">
    <wfs:Title>Bridges In Polygon</wfs:Title>
    <wfs:Abstract>Find all the bridges in a polygon.</wfs:Abstract>
    <wfs:Parameter name="AreaOfInterest" type="gml:Polygon"/>
    <wfs:QueryExpressionText
      returnFeatureTypes="brid:Bridge"
      language="urn:ogc:def:queryLanguage:OGC-WFS::WFS_QueryExpression"
      isPrivate="false">
      <wfs:Query typeNames="brid:Bridge">
        <fes:Filter>
          <fes:Within>
            <fes:ValueReference>gml:boundedBy</fes:ValueReference>
            ${AreaOfInterest}
          </fes:Within>
        </fes:Filter>
      </wfs:Query>
    </wfs:QueryExpressionText>
  </wfs:StoredQueryDefinition>
</wfs:CreateStoredQuery>

The details of the stored query are defined within the <wfs:StoredQueryDefinition> element. The mandatory id attribute provides the identifier of the stored query, which is later used to invoke the query (cf. Section 7.4.4). The WFS server ensures that the identifier is unique and not in use by any other stored query. The metadata elements <wfs:Title> and <wfs:Abstract> allow for providing a human-readable description of the stored query. The list of parameters that have to be passed by a client when invoking the stored query is determined by one or more <wfs:Parameter> elements. The name attribute on a <wfs:Parameter> element defines the parameter name, whereas the type attribute defines its data type.

The query definition follows in the <wfs:QueryExpressionText> element. The returnFeatureTypes attribute must contain the qualified XML name of the CityGML feature type that is returned by the stored query. Note that this attribute may also contain a list of feature type names or even be empty, in which case the response may comprise all feature types that are advertised by the WFS service (like for the predefined stored query GetFeatureById). The language used to define the query expression is restricted to urn:ogc:def:queryLanguage:OGC-WFS::WFS_QueryExpression for this version of the WFS. The isPrivate attribute lets you decide whether the actual query expression should be visible to clients in a response to a DescribeStoredQueries request.

The actual query expression is provided by one or more <wfs:Query> elements. The parameters defined within the <wfs:StoredQueryDefinition> element are referenced in this query expression using the notation ${parameter_name}. For more details on the CreateStoredQuery operation, please refer to the official WFS specification document.
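Once created, the stored query defined above could be invoked with a GetFeature request along the lines of the following sketch; the polygon coordinates are placeholders.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0"
  xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:gml="http://www.opengis.net/gml">
  <wfs:StoredQuery id="urn:StoredQueries:BridgesInPolygon">
    <wfs:Parameter name="AreaOfInterest">
      <gml:Polygon srsName="http://www.opengis.net/def/crs/epsg/0/4326">
        <gml:exterior>
          <gml:LinearRing>
            <gml:posList>13.3077 52.5101 13.3096 52.5101 13.3096 52.5111
              13.3077 52.5111 13.3077 52.5101</gml:posList>
          </gml:LinearRing>
        </gml:exterior>
      </gml:Polygon>
    </wfs:Parameter>
  </wfs:StoredQuery>
</wfs:GetFeature>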

The CreateStoredQuery operation may take the following XML attributes.

Supported XML attributes of a CreateStoredQuery operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface Standard to be used in the communication.
handle
O

The handle parameter allows a client to associate a mnemonic name with the request that will be used in exception reports.

Note

The WFS specification does not define a KVP encoding for the CreateStoredQuery operation.

DropStoredQuery

The DropStoredQuery operation allows previously created stored queries to be dropped from the WFS server. The request simply accepts the identifier of the stored query to drop. The stored query identifier shall be encoded in XML using the id attribute on the <wfs:DropStoredQuery> element.

To drop the stored query urn:StoredQueries:BridgesInPolygon created in Section 7.4.8, simply use the DropStoredQuery operation as shown in the following example.

<?xml version="1.0" encoding="UTF-8"?>
<wfs:DropStoredQuery xmlns:wfs="http://www.opengis.net/wfs/2.0" service="WFS" version="2.0.0"
   id="urn:StoredQueries:BridgesInPolygon"/>

The following XML attributes are available for the DropStoredQuery operation.

Supported XML attributes of a DropStoredQuery operation. (O = optional, M = mandatory)
XML attribute
O / M
Default value
Description
service
M
WFS (fixed)
The service attribute indicates the service type. The value “WFS” is fixed.
version
M
2.0.x
The version of the WFS Interface Standard to be used in the communication.
handle
O

The handle parameter allows a client to associate a mnemonic name with the request that will be used in exception reports.
id
M

The identifier of the stored query to be dropped.

The corresponding KVP-encoded request is shown below.

http[s]://[host][:port]/[context_path]/wfs?
SERVICE=WFS&
VERSION=2.0.0&
REQUEST=DropStoredQuery&
STOREDQUERY_ID=urn:StoredQueries:BridgesInPolygon

The following KVP parameters can be used when invoking the DropStoredQuery operation.

Supported KVP parameters of a DropStoredQuery operation. (O = optional, M = mandatory)
KVP parameter
O / M
Default value
Description
SERVICE
M
WFS (fixed)
see above
VERSION
M
2.0.x
see above
STOREDQUERY_ID
M

see above

Web-based WFS client

The 3D City Database WFS is shipped with a simple web-based client that is mainly meant to test the functionality of the server. The client is automatically installed with the server and is available at the following URL (cf. Section 7.4.1.2 for details):

http[s]://[host][:port]/[context_path]/wfsclient

The screenshot below shows the user interface of the client rendered in a standard web browser.

[Figure: Web-based WFS client]

The user interface consists of two text fields. A user simply enters the XML-encoded operation request that shall be sent to the server into the upper text field named WFS Request. Clicking the Send button forwards the request to the server. As soon as the response document is received from the WFS server, it is rendered in the lower text field named WFS Result.

Caution

Avoid sending requests through this client that might result in a large number of city objects in the response document. Otherwise, the available main memory of the web browser is quickly exhausted when trying to display the response document, which can render the browser unresponsive or even lead to a crash.

You may want to use the count attribute on the GetFeature request in order to limit the maximum number of features to be contained in the response document. Alternatively, you can specify the “hits” value for the resultType attribute in order to only receive the number of features matching your query instead of the features themselves (cf. Section 7.4.4).

Web Feature Service using Docker

The 3DCityDB Web Feature Service (WFS) Docker image exposes the capabilities of the Web Feature Service for dockerized applications and workflows. Using the WFS Docker image, you can expose the features stored in a 3DCityDB instance through an OGC WFS interface offering a rich set of features like advanced filter capabilities. For a basic configuration, only the connection credentials of the 3DCityDB (CITYDB_* environment variables) have to be specified. All WFS functionalities are supported by the images.

Synopsis

docker run --name wfs [-d] -p 8080:8080 \
    [-e CITYDB_TYPE=postgresql|oracle] \
    [-e CITYDB_HOST=the.host.de] \
    [-e CITYDB_PORT=thePort] \
    [-e CITYDB_NAME=theDBName] \
    [-e CITYDB_SCHEMA=theCityDBSchemaName] \
    [-e CITYDB_USERNAME=theUsername] \
    [-e CITYDB_PASSWORD=theSecretPass] \
    [-e WFS_CONTEXT_PATH=wfs-context-path] \
    [-e WFS_ADE_EXTENSIONS_PATH=/path/to/ade-extensions/] \
    [-e WFS_CONFIG_FILE=/path/to/config.xml] \
    [-v /my/data/config.xml:/path/to/config.xml] \
  3dcitydb/wfs[:TAG]

When running containers with default settings, the WFS will listen at the following URL. Note that the web root is used as the context path in this case.

http[s]://[host][:port]/wfs

The Web-based client is available here:

http[s]://[host][:port]/wfsclient

Image variants and versions

The WFS Docker images are based on the official Apache Tomcat images and are available as Debian and Alpine Linux variants. Table 7.29 gives an overview of the available images. Currently, Tomcat 9 images are used as base images for the WFS.

The edge images are automatically built and published on every push to the master branch of the 3DCityDB WFS Github repository using the latest stable version of the base images. The latest and release image versions are only built when a new release is published on Github. The latest tag will point to the most recent release version.

3DCityDB WFS Docker image variants and versions
Tag      Debian variant    Alpine variant
edge     edge              edge-alpine
latest   latest            latest-alpine
5.0.0    5.0.0             5.0.0-alpine

The images are available on 3DCityDB DockerHub and can be pulled like this:

docker pull 3dcitydb/wfs:TAG

The image tag is composed of the WFS version and the image variant. Debian is the default image variant, for which no variant suffix is appended to the tag. For the Alpine Linux images, -alpine is appended. The full list of available tags can be found on DockerHub. Here are some examples of full image tags:

docker pull 3dcitydb/wfs:edge
docker pull 3dcitydb/wfs:edge-alpine
docker pull 3dcitydb/wfs:latest-alpine
docker pull 3dcitydb/wfs:5.0.0
docker pull 3dcitydb/wfs:5.0.0-alpine

Usage and configuration

A 3DCityDB WFS Docker container is configured using environment variables and a WFS config.xml file. The easiest way of using the WFS Docker image is to use the default config.xml shipped inside the image and to set the database connection details and/or the web context path through environment variables. The default config file exposes all filter capabilities and feature types from the connected database to the WFS and should be suitable for most situations.

If you require more specific settings, get a copy of default-config.xml and build your own config file (see Configuring the WFS). Mount your custom config file into the container at runtime (see the docker run docs) and set the WFS_CONFIG_FILE variable to apply it, as in the example below.
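A minimal sketch of such a run command is shown below; the host path, container path, host name and credentials are placeholders.

docker run -d --name wfs -p 8080:8080 \
    -e CITYDB_HOST=my.db.host \
    -e CITYDB_NAME=citydb \
    -e CITYDB_USERNAME=wfs_user \
    -e CITYDB_PASSWORD=changeMe \
    -v /my/data/my-config.xml:/var/wfs/config.xml \
    -e WFS_CONFIG_FILE=/var/wfs/config.xml \
  3dcitydb/wfs:latest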

All available environment variables are listed and described below.

Note

The environment variables are optional. If you do not provide them, make sure that your config.xml file contains all settings (including database connection details) required to run the service. Otherwise, the WFS will throw error messages when starting the container. If you use environment variables though, they always take precedence over corresponding settings in the config.xml file. Thus, you can create custom config files and use them with different databases by overwriting the settings with the environment variables.

CITYDB_TYPE=<postgresql|oracle>

The type of the 3DCityDB to connect to. postgresql is the default.

CITYDB_HOST=<hostname or ip>

Name of the host or IP address on which the 3DCityDB is running.

CITYDB_PORT=<port>

Port of the 3DCityDB to connect to. Default is 5432 for PostgreSQL and 1521 for Oracle, depending on the setting of CITYDB_TYPE.

CITYDB_NAME=<dbName>

Name of the 3DCityDB database to connect to.

CITYDB_SCHEMA=<citydb>

Schema to use when connecting to the 3DCityDB. The defaults are citydb for PostgreSQL and the value of CITYDB_USERNAME for Oracle, depending on the setting of CITYDB_TYPE.

CITYDB_USERNAME=<username>

Username to use when connecting to the 3DCityDB.

CITYDB_PASSWORD=<thePassword>

Password to use when connecting to the 3DCityDB.

WFS_CONFIG_FILE=</path/to/custom/config.xml>

Path of the WFS config file to use. See above for how to create and use a custom config file.

WFS_CONTEXT_PATH=<wfs-context-path>

The URL subpath where the WFS is served (see Section 7.4.1.2). The default value is ROOT, which serves the WFS from the web root. For instance, set WFS_CONTEXT_PATH=citydb-wfs to serve from http[s]://my-domain/citydb-wfs/. Note: nested context paths are currently not supported.

WFS_ADE_EXTENSIONS_PATH=</path/to/ade-extension/>

Allows for providing an alternative directory where the WFS service shall search for ADE extensions (default: ade-extensions folder in the WEB-INF directory). The WFS service must have read access to this directory (see Section 7.3 for more details).

Build your own images

3DCityDB WFS images can easily be built on your own. The images support the following build arguments:

BUILDER_IMAGE_TAG=<11.0.12-jdk-slim>

Tag of the builder base image, see https://hub.docker.com/_/openjdk.

RUNTIME_IMAGE_TAG=<9-alpine>

Tag of the runtime image, https://hub.docker.com/_/tomcat.

DEFAULT_CONFIG=</path/to/default/config.xml>

Name of the default config file that shall be copied into the image and used by default when running a container. The config file must be located inside the resources/docker folder (default: default-config.xml).

TOMCAT_USER=<tomcat>

Name of the user running the Tomcat service inside the container (default: tomcat). Note that the user is assigned the fixed UID = 1000.

TOMCAT_GROUP=<tomcat>

Name of the group that the user shall be assigned to (default: tomcat). Note that the group is assigned the fixed GID = 1000.

Build process

  1. Clone the WFS GitHub repository and navigate to the cloned repository:

    git clone https://github.com/3dcitydb/web-feature-service.git
    cd web-feature-service
    
  2. Build the image using docker build:

# Debian variant
docker build . \
  -t 3dcitydb/wfs:edge

# Alpine variant
docker build . \
  -t 3dcitydb/wfs:edge-alpine \
  -f Dockerfile.alpine
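
The build arguments listed above can be passed to docker build via --build-arg. The following sketch uses the default and placeholder values from the list above; adapt them to your needs:

# Alpine variant with explicit build arguments (illustrative values)
docker build . \
  -t 3dcitydb/wfs:edge-alpine \
  -f Dockerfile.alpine \
  --build-arg BUILDER_IMAGE_TAG=11.0.12-jdk-slim \
  --build-arg RUNTIME_IMAGE_TAG=9-alpine \
  --build-arg TOMCAT_USER=tomcat \
  --build-arg TOMCAT_GROUP=tomcat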

Examples

This example shows how to bring up a 3DCityDB WFS using the Importer/Exporter and 3DCityDB Docker images. We are going to serve the LoD3 Railway dataset via WFS and run some example queries.

Database creation and data import

Note

A more detailed example on importing data using the 3DCityDB Docker images is available here.

  1. Download the dataset, create a folder and put the downloaded file in the new folder. In the following we assume the file is at /my/data/Railway_Scene_LoD3.zip.
  2. Create a Docker network and a 3DCityDB Docker container for our dataset:
docker network create citydb-net

docker run -d --name citydb \
  --network citydb-net \
  -e "POSTGRES_PASSWORD=changeMe" \
  -e "SRID=3068" \
  3dcitydb/3dcitydb-pg:latest-alpine
  3. Import the dataset using the 3DCityDB Importer/Exporter Docker image:
docker run -i -t --rm --name impexp \
    --network citydb-net \
    -v /my/data:/data \
  3dcitydb/impexp:latest-alpine import \
    -H citydb \
    -d postgres \
    -u postgres \
    -p changeMe \
    /data/Railway_Scene_LoD3.zip

WFS configuration and testing

Start a 3DCityDB WFS container. We are going to expose port 8080 to the host system for the service and serve WFS content from /citydb-wfs.

docker run -d --name wfs \
    -p 8080:8080 \
    --network citydb-net \
    -e CITYDB_HOST=citydb \
    -e CITYDB_NAME=postgres \
    -e CITYDB_USERNAME=postgres \
    -e CITYDB_PASSWORD=changeMe \
    -e WFS_CONTEXT_PATH=citydb-wfs \
  3dcitydb/wfs:latest-alpine

Note

The 3DCityDB, Importer/Exporter, and WFS Docker containers are attached to the same Docker network citydb-net that we created at the beginning. Thus, container names (e.g. citydb) can be used as hostnames for communication between the containers. See the Docker network docs for more Docker networking options.

Now the WFS should be up and running. Let’s use docker logs to check whether the service has started:

$ docker logs -n 5 wfs

03-Sep-2021 12:24:14.036 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deploying web application directory [/usr/local/tomcat/webapps/host-manager]
03-Sep-2021 12:24:14.049 INFO [main] org.apache.catalina.startup.HostConfig.deployDirectory Deployment of web application directory [/usr/local/tomcat/webapps/host-manager] has finished in [13] ms
03-Sep-2021 12:24:14.052 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
03-Sep-2021 12:24:14.058 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["ajp-nio-8009"]
03-Sep-2021 12:24:14.061 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 515 ms

If you see output similar to this, the service started successfully.

Get WFS capabilities

The service is listening on port 8080 of our local machine. The WFS endpoint and the web-based client can be accessed from a browser:

  • WFS service endpoint: http://localhost:8080/citydb-wfs/wfs
  • WFS Web-based client: http://localhost:8080/citydb-wfs/wfsclient

Let’s query the capabilities document to check what our WFS can do. We are going to use curl for this:

serviceURL='http://localhost:8080/citydb-wfs/wfs?'
query='SERVICE=WFS&REQUEST=GetCapabilities'
curl -v "$serviceURL$query"

The capabilities document returned looks like this:

<?xml version="1.0" standalone="yes"?>
<wfs:WFS_Capabilities xmlns:fes="http://www.opengis.net/fes/2.0" xmlns:gml="http://www.opengis.net/gml" xmlns:wtr="http://www.opengis.net/citygml/waterbody/2.0" xmlns:ows="http://www.opengis.net/ows/1.1" xmlns:veg="http://www.opengis.net/citygml/vegetation/2.0" xmlns:tran="http://www.opengis.net/citygml/transportation/2.0" xmlns:dem="http://www.opengis.net/citygml/relief/2.0" xmlns:grp="http://www.opengis.net/citygml/cityobjectgroup/2.0" xmlns:bldg="http://www.opengis.net/citygml/building/2.0" xmlns:wfs="http://www.opengis.net/wfs/2.0" xmlns:tun="http://www.opengis.net/citygml/tunnel/2.0" xmlns:frn="http://www.opengis.net/citygml/cityfurniture/2.0" xmlns:gen="http://www.opengis.net/citygml/generics/2.0" xmlns:brid="http://www.opengis.net/citygml/bridge/2.0" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:luse="http://www.opengis.net/citygml/landuse/2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.opengis.net/wfs/2.0 http://schemas.opengis.net/wfs/2.0/wfs.xsd" version="2.0.0">
  <ows:ServiceIdentification>
    <ows:Title>3DCityDB Web Feature Service</ows:Title>
    <ows:ServiceType>WFS</ows:ServiceType>
    <ows:ServiceTypeVersion>2.0.0</ows:ServiceTypeVersion>
  </ows:ServiceIdentification>
  <ows:ServiceProvider>
    <ows:ProviderName/>
    <ows:ServiceContact/>
  </ows:ServiceProvider>
  <ows:OperationsMetadata>
    <ows:Operation name="GetCapabilities">
      <ows:DCP>
        <ows:HTTP>
          <ows:Get xlink:href="http://localhost:8080/citydb-wfs/wfs"/>
          <ows:Post xlink:href="http://localhost:8080/citydb-wfs/wfs"/>

<!-- ... -->
<!-- ... -->

      </fes:SpatialOperators>
    </fes:Spatial_Capabilities>
  </fes:Filter_Capabilities>
</wfs:WFS_Capabilities>

Example query: Feature by ID

Now let’s query a feature by ID (GMLID_BUI46739_1739_10911) from the WFS.

The WFS request for this is stored in request.xml and looks as follows:

request.xml
<?xml version="1.0" encoding="UTF-8"?>
<wfs:GetFeature service="WFS" version="2.0.0" xmlns:wfs="http://www.opengis.net/wfs/2.0">
  <wfs:StoredQuery id="http://www.opengis.net/def/query/OGC-WFS/0/GetFeatureById">
    <wfs:Parameter name="id">GMLID_BUI46739_1739_10911</wfs:Parameter>
  </wfs:StoredQuery>
</wfs:GetFeature>

Let’s send a POST request with the content from request.xml to the WFS and write the output to building.gml:

curl -v \
  -X POST \
  -H 'Content-Type: text/xml' \
  -d "@request.xml" \
  "http://localhost:8080/citydb-wfs/wfs" > building.gml

The shortened and beautified content of building.gml looks like this:

<?xml version="1.0" standalone="yes"?>
<bldg:Building gml:id="GMLID_BUI46739_1739_10911">
  <gml:description>Simple Chapel with a recess/loggia</gml:description>
  <gml:name>Chapel KIT/KHH-1</gml:name>
  <gml:boundedBy>
    <gml:Envelope srsName="urn:ogc:def:crs:EPSG::3068" srsDimension="3">
      <gml:lowerCorner>-299.374655062533 575.1129259060015 103.648365247638</gml:lowerCorner>
      <gml:upperCorner>-272.47917424008 596.1169211194645 121.04746928772363</gml:upperCorner>
    </gml:Envelope>
  </gml:boundedBy>
  <core:creationDate>2021-09-03</core:creationDate>
  <core:relativeToTerrain>entirelyAboveTerrain</core:relativeToTerrain>
  <bldg:outerBuildingInstallation>
    <bldg:BuildingInstallation gml:id="UUID_071439a3-5cd7-4ace-b0cb-4cedec5a6540">
      <gml:name>Tower</gml:name>
      <core:creationDate>2021-09-03</core:creationDate>
      <core:relativeToTerrain>entirelyAboveTerrain</core:relativeToTerrain>
      <bldg:function>1040</bldg:function>
      <bldg:lod3Geometry>
        <gml:MultiSurface gml:id="UUID_87c65640-96ad-42d2-aa2d-367245f4a865">

<!-- ... -->
<!-- ... -->

Shutdown and cleanup

When the services and network are no longer required, they can be stopped and removed:

docker stop citydb wfs
docker rm citydb wfs
docker network rm citydb-net

Appendix

This section contains some background information about the 3DCityDB contributors and how they use the software in research and daily business. Visit the official 3DCityDB homepage for news updates or follow us on our Twitter channel.

Contributors

Active participants in development

  • Thomas H. Kolbe, Son H. Nguyen, Kanishk Chaturvedi, Bruno Willenborg, Andreas Donaubauer (Chair of Geoinformatics, Technische Universität München)
  • Claus Nagel, Zhihang Yao (Virtual City Systems, Berlin)
  • Harald Schulz, Philipp Willkomm, György Hudra (M.O.S.S. Computer Grafik Systeme GmbH, Taufkirchen, Germany)
  • Felix Kunde (Beuth University of Applied Sciences)

Participants in earlier developments

The 3D City Database Version 4.0 and its tools are based on earlier versions. During the development phase 2006-2012 at the Institute for Geodesy and Geoinformation Science, TU Berlin, the following individuals contributed to the development:

  • Thomas H. Kolbe, Claus Nagel, Javier Herreruela, Gerhard König, Alexandra Lorenz (née Stadler), Babak Naderi (Institute for Geodesy and Geoinformation Science, Technische Universität Berlin)
  • Felix Kunde (University of Potsdam)

During the development phase 2004-2006 at the Institute for Cartography and Geoinformation, University of Bonn, the following individuals contributed to the development:

  • Thomas H. Kolbe, Lutz Plümer, Gerhard Gröger, Viktor Stroh, Jörg Schmittwilken (Institute for Cartography and Geoinformation, University of Bonn)
  • Andreas Poth, Ugo Taddei (lat/lon GmbH, Bonn)

3DCityDB @ TU München

The Chair of Geoinformatics at Technische Universität München (TUM) took over the further development of the 3D City Database from TU Berlin (TUB) when Prof. Kolbe moved from TUB to TUM in 2012. 3DCityDB is being used at TUM in teaching courses on spatial databases and 3D city modeling, in student projects and master theses, and in many past and ongoing research projects.

Interactive Cloud-based 3D Webclient

Besides the Open Source 3DCityDB-Web-Map-Client described in Section 6, the Chair of Geoinformatics has also developed a “Professional Version” of the interactive 3D web client. This version links 3D visualization models exported in KML/glTF from 3DCityDB with table data exported using the 3DCityDB Spreadsheet Generator and allows viewing, editing, and querying objects and their thematic data (cf. [HeNK2012]; [YSKK2012]; [ChYK2015]). The configuration of a 3D webclient project (information about each layer, thematic data, preferences, spatial bookmarks) is also stored in the cloud as a Google Spreadsheet. The following image shows a screenshot of a tool created by TUM for the Energy Atlas Berlin that is based on the “3D Webclient Professional”. It estimates building energy demands based on the German standard DIN 18599 and the 3D building models in CityGML, and allows users to interactively explore retrofitting potentials for single buildings or sets of buildings (cf. [KaKo2014]). Thematic data are stored in Google Spreadsheets, where spreadsheet formulas are employed for the ad-hoc computation of energy values and their changes according to retrofit measures. The costs of the retrofitting measures are also estimated for each building individually.

_images/appendix_tum_webclient.jpg

Research Projects in which 3DCityDB is being used

Semantic 3D city modeling, city system modeling, and indoor navigation are major research fields of the Chair of Geoinformatics at TUM. We have been driving the international development of CityGML and IndoorGML within the OGC. We are partners in and/or coordinators of projects on Smart Cities, Sustainable Urban Development, and Strategic Energy Planning funded by the Climate-KIC of the European Institute of Innovation & Technology (EIT). Projects using 3DCityDB are: Energy Atlas Berlin, Neighborhood Demonstrators, Smart Sustainable Districts, Modeling City Systems, and Smart District Data Infrastructure. 3DCityDB has also been used in the OGC Future Cities Pilot and in the project ‘3D Tracks - Collaborative Subway Track Planning in Multi-Scale 3D City and Building Models’ [BKDS2015] funded by the German Science Foundation (DFG), as well as in projects on deriving 3D DLM from 2D DLM and DTM/DSM [FMWD2018].

Current and future work on 3DCityDB

The team at the Chair of Geoinformatics is currently working on the following tools and extensions to 3DCityDB. Most of them will be made available as Open Source software within the 3DCityDB repository as soon as they are finished and tested:

Support of the Dynamizer ADE: Dynamizers extend CityGML to support the representation and exchange of time-varying attribute values for all CityGML feature properties using timeseries. Support in 3DCityDB is facilitated by 1) provision of the Java library for importing and exporting CityGML Dynamizer ADE contents, and 2) provision of a new web service, the so-called InterSensor Service, which will give access to the timeseries data stored in the 3DCityDB according to the OGC Sensor Web Enablement standards.

Update Manager: This tool will provide a check-out / check-in functionality for parts of stored 3D city models for the purpose of editing and updating. It will automatically detect changes made on the previously exported (checked-out) CityGML dataset and create WFS as well as direct database transactions that will update the 3DCityDB contents according to the identified changes (check-in).

Solar potential analysis: This tool computes the solar energy of direct and diffuse irradiation on building walls and roofs. The computation considers shadow casting by buildings, vegetation, a Digital Surface Model and the Digital Terrain Model. The monthly energy and irradiation values as well as the sky view factors are attached as generic attributes to wall and roof surface objects and in aggregated form to buildings. The software is implemented in Java and directly connects to the 3DCityDB. It has been employed to estimate the solar potentials in the official Energy Atlas of the city of Helsinki, Finland.

3DCityDB @ Virtual City Systems

Virtual City Systems has successfully employed the 3D City Database in customer projects worldwide and also funded its development. With the open source database at the core, Virtual City Systems offers a 3D Spatial Data Infrastructure solution for the management, distribution, maintenance and visualization of massive 3D geo data. Virtual City Systems has a leading role in the development of the 3D City Database and offers a branded version called VC Database to answer customer demands and to provide support and maintenance.

VC Database

The VC Database provides enhanced database functionality as well as plugins for the Importer/Exporter tool that support workflows for maintaining and updating the 3D city model content. Main features are:

  • Data continuation and object histories
    The VC Database offers tools and additional database structures to manage updates and multiple versions of city objects in the database as well as to automatically track changes of city objects. Both aspects are relevant in the context of the continuation of city objects and entire city models. The VC Database offers the five operations Insert, Delete, Terminate, Replace, and Update that enable you to easily keep your 3D city model up-to-date and to get access to previous states of the model.
  • Integration of additional LoDs against existing city objects in the database
    This plugin allows for integrating city objects from an external data source with existing city objects stored in the database. The candidate objects are matched against the database objects based on thematic and spatial checks. Therefore, data inconsistencies can easily be spotted and analyzed before an import. If an integration is performed, existing LoDs are replaced and newly introduced LoDs are attached to the existing objects. Moreover, appearance information can be integrated without replacing the geometry.
  • Deletion and termination of city objects
    The VC Database provides a user-friendly Importer/Exporter plugin for deleting and terminating city objects using spatial and thematic filter criteria as well as delete lists.
  • Transactional Web Feature Service
    Customers of the VC Database already benefit from an OGC-compliant WFS 2.0 implementation that supports transactions as well as comprehensive spatial and thematic queries using the OGC Filter Encoding standard.

The VC Database is fully compliant with the 3D City Database. If features developed for the VC Database have gained enough maturity, Virtual City Systems will introduce them to the open source 3D City Database project (e.g. the WFS interface).

VC Suite – The Digital Geo-Twin Platform

The VC Suite is a modular 3D Spatial Data Infrastructure solution to store, manage, distribute, and visualize 3D geo data. Core components are the VC Database and its OGC WFS interface for accessing and editing the data; the VC Warehouse, a data exchange solution running on FME technology that enables users to export 3D city model content from the VC Database into various industry GIS and CAD formats; and the web-based authoring tool VC Publisher for creating high-performance 3D web maps. Based on the Open Source 3D City Database, the VC Suite allows for building a 3D SDI platform for Digital Geo-Twins based on open standards and interfaces.

_images/appendix_vc_suite_components_fig.png

Components of the VC Suite.

Our VC Map web mapping technology offers enhanced GIS functionality beyond pure visualization such as measurements, real-time shadows, viewshed analysis, WFS-based thematic and spatial queries, POI integration, data exports through a VC Warehouse interface, and integration of external data services and sources (e.g., WMS, WFS, GeoJSON, vector tiles) as well as meshes, point clouds and oblique imagery. The 3D web maps are based on the Cesium WebGL virtual globe and therefore can be displayed on modern web browsers and mobile devices such as tablets and smartphones without the need for additional plugins.

_images/appendix_vcmap_berlin_3dcitymodel_fig.png

The Berlin 3D City Model is managed based on our VC Suite. The Berlin Economic atlas shown above is a VC Map application that displays the entire city model and combines the 3D objects with business and POI information, see https://www.businesslocationcenter.de/wab/maps/main/#/.

3DCityDB @ M.O.S.S.

M.O.S.S. Computer Grafik Systeme GmbH is a leading provider of geotopographical data management and processing solutions. Within the M.O.S.S. product suite novaFACTORY, the 3D City Database has been used since 2011 as the primary storage container for 3D and CityGML-based data. As an active development partner within the 3D City Database implementation group, M.O.S.S. drives the technological progress of the 3D City Database. Within M.O.S.S. customer projects, millions of CityGML objects are imported, managed, and exported by novaFACTORY and the included 3D City Database. One example is the nationwide database for the German LoD1 building product (LOD-DE), which is based on the 3D City Database. novaFACTORY is also used as a 3D platform within different projects concerning renewable energy topics such as building heat demand analysis or solar potential assessment.

_images/appendix_novaFactory_simStadt_fig.png

Example of a 3D building heat demand map for the city of Ludwigsburg created with novaFACTORY 3D within the SimStadt project.

novaFACTORY at a glance

novaFACTORY is an advanced Spatial Data Management solution for efficient geodata cataloguing, exploitation, and dissemination. With novaFACTORY we are leading the way in the full integration of enterprise-wide geospatial data sources, which the whole organization can access and work with, covering all aspects of:

  • Data Import
  • Quality Assurance
  • Data Storage and Management
  • Data Processing and Enrichment
  • Data Dissemination

As applications for geodata have grown, so too has the need to efficiently administer them. Many businesses, whether government departments or private companies, are faced with the complex task of managing geospatial data. The challenge is to allow collaboration across the organization in a meaningful way, from a range of sources and formats located throughout their enterprise.

novaFACTORY is the solution to this challenge. It brings geodata together and eliminates barriers to spatial data usability by automatically uniting disparate data and combining them into one spatial database. novaFACTORY is designed for seamlessly integrating large geographical data sets from many different sources, e.g. topographic maps, digital surface models, aerial photographs or 3D building models.

Within novaFACTORY, the 3D GDI module is where the 3D City Database comes into play.

_images/appendix_novaFactory_workflow_fig.png

novaFACTORY 3D overview and workflow: 3D data management based on the 3D City Database.

novaFACTORY 3D GDI

The novaFACTORY 3D GDI module is designed for handling and serving 3D city models in CityGML format. It enables the seamless, RDBMS-based storage and dissemination of 3D city models as well as setting up web services on top of them. The data is kept within the 3D City Database and can be automatically transferred into an ArcGIS® Geodatabase.

As with all novaFACTORY modules, data can be disseminated via an intuitive web interface and from any workstation in alternative formats, e.g. CityGML, KML/COLLADA, VRML, 3D Shape, 3D PDF, and 3D DXF. Depending on the chosen format, different export parameters can be selected to show specific object data.

Additional benefit is gained by automatically enhancing the 3D building data. The novaFACTORY 3D GDI module offers a fully integrated solar potential analysis during the export, targeted at the area of interest. 3D data can be visualized directly. Appropriate ArcGIS presentation rules will be generated automatically during the export.

The novaFACTORY 3D GDI module works best in cooperation with the novaFACTORY 3D Pro module for automatic recognition of building roofs from photogrammetric raw data. This raw data will be supplied automatically and the 3D City Database will be updated automatically when production data are approved.

References

[BaFi2008]Barnes, M., Finch, E. L. (2008): COLLADA - Digital Asset Schema Release 1.5.0. The Khronos Group Inc., Sony Computer Entertainment Inc, April 2008. Weblink (accessed March 2020): http://www.khronos.org/files/collada_spec_1_5.pdf
[BKDS2015]Borrmann, A., Kolbe, T. H., Donaubauer, A., Steuer, H., Jubierre, J. R., Flurl, M. (2015): Multi-scale geometric-semantic modeling of shield tunnels for GIS and BIM applications. Computer-Aided Civil and Infrastructure Engineering (Vol. 30, No. 4). Weblink (accessed March 2020): http://dx.doi.org/10.1111/mice.12090.
[ChYK2015]Chaturvedi, K., Yao, Z., Kolbe, T. H. (2015): Web-based Exploration of and Interaction with Large and Deeply Structured Semantic 3D City Models using HTML5 and WebGL. In: Proc. of the 35th Annual Conference of the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF), Weblink (accessed March 2020): https://mediatum.ub.tum.de/node?id=1245285
[CGJT1980]Coffman, E.G. Jr., Garey, M. R., Johnson, D.S., Tarjan, R.E. (1980): Performance bounds for level-oriented two-dimensional packing algorithms. In: SIAM Journal on Computing 9 (1980), pp. 801–826.
[DBBF2005]Döllner, J., Buchholz, H., Brodersen, F., Glander, T., Jütterschenke, S., Klimetschek, A. (2005): Smart Buildings – A Concept for Ad-Hoc Creation and Refinement of 3D Building Models. In: Kolbe, T. H., Gröger, G. (eds.): Proceedings of the 1st International Workshop on Next Generation 3D City Models, Bonn, Germany, June 2005, EuroSDR Publications.
[DKLS2006]Döllner, J., Kolbe, T. H., Liecke, F., Sgouros, T., Teichmann, K. (2006): The Virtual 3D City Model of Berlin - Managing, Integrating, and Communicating Complex Urban Information. In: Proceedings of the 25th Urban Data Management Symposium UDMS 2006 in Aalborg, Denmark, May 15-17. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1145759/484057.pdf
[FMWD2018]Fiutak, G.; Marx, C.; Willkomm, P.; Donaubauer, A.; Kolbe, T. H. (2018): Automatisierte Generierung eines digitalen Landschaftsmodells in 3D. PFGK18 - Photogrammetrie - Fernerkundung - Geoinformatik - Kartographie, 37. Jahrestagung in München 2018 (Publikationen der Deutschen Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V. 27), Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e.V., 888-902.
[FVFH1995]Foley, J., van Dam, A,. Feiner, S., Hughes, J. (1995): Computer Graphics: Principles and Practice. Addison Wesley, 2nd Ed.
[Khro2018]glTF - Efficient, Interoperable Transmission of 3D Scenes and Models, Khronos, Weblink (accessed March 2020): https://www.khronos.org/gltf
[GKSS2005]Gröger, G., Kolbe, T. H., Schmittwilken, J., Stroh, V., Plümer, L. (2005): Integrating versions, history and levels-of-detail within a 3D geodatabase. In: Kolbe, T. H., Gröger, G. (eds.): Proceedings of the 1st International Workshop on Next Generation 3D City Models, Bonn, Germany, June 2005, EuroSDR Publications. Weblink (accessed March 2020): https://mediatum.ub.tum.de/doc/1453849/1453849.pdf
[GKCN2008]Gröger G., Kolbe, T. H., Czerwinski, A., Nagel C. (2008): OpenGIS® City Geography Markup Language (CityGML) Encoding Standard, Version 1.0.0. Open Geospatial Consortium, Doc. No. 08-007r1, August 20th. Weblink (accessed March 2020): http://portal.opengeospatial.org/files/?artifact_id=28802
[GKNH2012]Gröger G., Kolbe, T. H., Nagel C., Häfele, K. H. (2012): OpenGIS® City Geography Markup Language (CityGML) Encoding Standard, Version 2.0.0. Open Geospatial Consortium, Doc. No. 12-019, Weblink (accessed March 2020): http://portal.opengeospatial.org/files/?artifact_id=28802
[HeNK2012]Herreruela, J., Nagel, C., Kolbe, T. H. (2012): Value-added Services for 3D City Models using Cloud Computing. In: Löwner, M.-O., Hillen, F., Wohlfahrt, R. (eds.): Geoinformatik 2012 “Mobilität und Umwelt”, Proc. of the Conference Geoinformatik 2012, 28.-30. 3. 2012 in Braunschweig. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1145739/42082.pdf
[Herr2001]Herring, J. (2001): The OpenGIS Abstract Specification, Topic 1: Feature Geometry (ISO 19107 Spatial Schema). OGC Document Number 01-101
[KaKo2014]Kaden, R., Kolbe, T. H. (2014): Simulation-Based Total Energy Demand Estimation of Buildings using Semantic 3D City Models. International Journal of 3-D Information Modeling, 3(2), 35-53, April-June 2014. Weblink (accessed March 2020): http://dx.doi.org/10.4018/ij3dim.2014040103
[KoGr2003]Kolbe, T. H., Gröger, G. (2003): Towards unified 3D city models. In Schiewe, J., Hahn, M., Madden, M., Sester, M. (eds.): Proceedings of the ISPRS Comm. IV Joint Workshop on Challenges in Geospatial Analysis, Integration and Visualization II in Stuttgart. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1145769/703861.pdf
[Kolb2009]Kolbe, T. H. (2009): Representing and Exchanging 3D City Models with CityGML. In: Lee, J., Zlatanova, S. (eds.): Proceedings of the 3rd International Workshop on 3D Geo-Information 2008 in Seoul, South Korea. Lecture Notes in Geoinformation & Cartography, Springer Verlag, 2009. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1145752/947446.pdf
[KKNS2009]Kolbe, T. H.; König, G.; Nagel, C.; Stadler, A. (2009): 3D-Geo-Database for CityGML, Documentation Version 2.0.1, Institute for Geodesy and Geoinformation Science, TU Berlin. Weblink (accessed March 2020): http://www.3dcitydb.org/3dcitydb/fileadmin/downloaddata/3DCityDB-Documentation-v2_0.pdf
[Kund2013]Kunde, F. (2013): CityGML in PostGIS: portability, usage and performance analysis using the example of the 3D City Database of Berlin. (in german only) Master Thesis, University of Potsdam, Germany, Weblink (accessed March 2020): https://publishup.uni-potsdam.de/opus4-ubp/frontdoor/deliver/index/docId/6186/file/kunde_master.pdf
[LoMV1999]Lodi A., Martello S., Vigo D. (1999): The Touching Perimeter Algorithm: Heuristic and Metaheuristic Approaches for a Class of Two-Dimensional Bin Packing Problems. In: INFORMS J on Computing: pp. 345-357.
[LoMM2002]Lodi A., Martello S., Monaci M., (2002): Two-dimensional packing problems: A survey. In: European Journal of Operational Research, 141, issue 2, pp. 241-252.
[Murr2010]Murray, C. et al. (2010): Oracle ® Spatial Developer’s Guide 11g Release 2 (11.2), E11830-06, March 2010. Weblink (accessed March 2020): http://docs.oracle.com/cd/E18283_01/appdev.112/e11830.pdf
[NaSt2008]Nagel, C., Stadler, A. (2008): Die Oracle-Schnittstelle des Berliner 3D-Stadtmodells. In: Clemen, C. (Ed.): Entwicklerforum Geoinformationstechnik 2008, Shaker Verlag, Aachen, S. 197-221.
[PGKS2005]Plümer, L., Gröger, G., Kolbe, T. H., Schmittwilken, J., Stroh, V., Poth, A., Taddeo, U. (2005): 3D-Geodatenbank Berlin, Dokumentation V1.0 Institut für Kartographie und Geoinformation der Universität Bonn (IKG), lat/lon GmbH. Weblink (accessed March 2020): https://pdfslide.net/documents/3d-geodatenbank-berlin-3d-geodatenbank-berlin-dokumentation-v10-institut-fuer.html
[SNKK2009]Stadler, A., Nagel, C., König, G., Kolbe, T. H. (2009): Making interoperability persistent: A 3D geo database based on CityGML. In: Lee, J., Zlatanova, S. (eds.): Proceedings of the 3rd International Workshop on 3D Geo-Information 2008 in Seoul, South Korea. Lecture Notes in Geoinformation & Cartography, Springer Verlag, 2009. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1145748/781842.pdf
[Whit2009]Whiteside, A. (2009): Definition identifier URNs in OGC namespace, Version 1.3. Open Geospatial Consortium, OGC® Best Practices, Doc. No. 07-092r3, January 15th. Weblink (accessed March 2020): http://portal.opengeospatial.org/files/?artifact_id=30575
[Wils2008]Wilson, T. (2008): OGC® KML, OGC® Standard Version 2.2.0. Open Geospatial Consortium, Doc. No. 07-147r2, April 14th. Weblink (accessed March 2020): http://portal.opengeospatial.org/files/?artifact_id=27810
[Weis2015]Weisstein, E. W. (2015): Affine Transformation, Wolfram MathWorld, Weblink (accessed March 2020): http://mathworld.wolfram.com/AffineTransformation.html
[YSKK2012]Yao, Z., Sindram, M., Kaden, R., Kolbe, T. H. (2014): Cloud-basierter 3D-Webclient zur kollaborativen Planung energetischer Maßnahmen am Beispiel von Berlin und London. In: Kolbe, Bill, Donaubauer (eds.): Geoinformationssysteme 2014 – Beiträge zur 1. Münchner GI-Runde, 24.-25. 2. 2014, Wichmann Verlag, Berlin. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1276243/359202.pdf
[YaCK2016]Yao, Z., Chaturvedi, K., Kolbe, T. H. (2016): Browserbasierte Visualisierung großer 3D-Stadtmodelle durch Erweiterung des Cesium Web Globe. In: Kolbe, T. H., Bill, R., Donaubauer, A. (eds.): Geoinformationssysteme 2016 – Beiträge zur 3. Münchner GI-Runde, 24.-25. 2. 2016, Wichmann Verlag, Berlin. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1296408/547142.pdf
[YaKo2017]Yao, Z., Kolbe, T. H. (2017): Dynamically Extending Spatial Databases to support CityGML Application Domain Extensions using Graph Transformations. In: Kersten, T.P. (ed.): Beitrag zur 37. Wissenschaftlich-Technische Jahrestagung der DGPF. Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation e.V. Weblink (accessed March 2020): http://mediatum.ub.tum.de/doc/1425154/602735.pdf
[YNKH2018]Yao, Z., Nagel, C., Kunde, F., Hudra, G., Willkomm, P., Donaubauer, A., Adolphi, T., Kolbe, T. H. (2018): 3DCityDB - a 3D geodatabase solution for the management, analysis, and visualization of semantic 3D city models based on CityGML. Open Geospatial Data, Software and Standards 3 (5), 2018, 1-26. Weblink (accessed March 2020): http://dx.doi.org/10.1186/s40965-018-0046-7

Changelog

The changelogs of the individual 3DCityDB software tools can be found on the GitHub project websites listed below.

3D City Database
Importer-Exporter
Web Feature Service
3DCityDB-Web-Map-Client