
Hyrax-1.16

Hyrax-1.16.4 (updated 23 September 2021)


New Features

Added in Hyrax-1.16.4

NGAP & DMR++ Improvements

  • Trusted CMR
    • Reworked code to use http::url instead of std::string.
    • Replaced strings with http::url objects.
    • Moved AllowedHosts to http.
    • Fixed implementations of http::url::is_expired().
    • Switched the RemoteSource constructor to use shared_ptr.
    • Changed the way that http::url interprets URLs with no protocol.
    • Fixed concurrency issues in EffectiveUrlCache.
  • Corrected usage statement for get_dmrpp.
  • Handle the "missing data" files in the NGAP system.
  • Update NgapApiTest to reflect changes in CMR holdings.
  • Dropped a useless call to Chunk.inflate() and added a state check to protect us from a CONTIGUOUS variable that is marked as compressed.
  • Rewrote read_contiguous() using std::async() and std::future, dropping the SuperChunk idea (a minimal sketch of this pattern appears after this list).
  • First implementation of the new restified path with two mandatory and one optional path components.
  • OLFS
    • NGAP Updates:
      • Fixed Gradle dependencies according to the Snyk scan.
      • Session serialization.
      • Added session manager jars to ngap resources.
      • Updated production rules to include session manager code in the ngap application build.
      • Made the history_json output a JSONArray.
      • New landing page for NGAP service.
      • Added UNIX time to OLFS logs in NGAP.
    • Bug Fixes:
      • Fixed a bug in the computation of dimension sizes in the DAP4 IFH XSL.
      • Dropped DSR response for just the dataset_url.
      • Fixed broken json error output.
      • Changed the broken naming pattern for the PPT strings.
      • Enabled ChunkedInputStream debugging and cleaned up the messages in support of resolving the stream-pollution problem.
      • Fixed broken CSS and deployment-context link construction in the DSR HTML page.
    • Performance Improvements:
      • Added file size and last modified headers to flat file transfer.
      • Added auth.log independent of debug.log.
      • First pass at patching POST request handling.
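
The read_contiguous() item above refers to the std::async()/std::future pattern. The following is a minimal, self-contained sketch of that pattern only, not the handler's actual code; the fetch_range() helper, the fixed slice count, and the buffer handling are illustrative assumptions.

    // Sketch (not the dmr++ handler's code): split one contiguous read into
    // several concurrent ranged reads and reassemble the result in order.
    #include <algorithm>
    #include <cstddef>
    #include <future>
    #include <iostream>
    #include <vector>

    // Stand-in for a ranged read (e.g., an HTTP Range GET into a buffer).
    static std::vector<char> fetch_range(std::size_t offset, std::size_t size) {
        return std::vector<char>(size, static_cast<char>(offset % 128));
    }

    int main() {
        const std::size_t total = 1000000;  // bytes in the contiguous variable
        const std::size_t parts = 4;        // number of concurrent transfers
        const std::size_t slice = (total + parts - 1) / parts;

        std::vector<std::future<std::vector<char>>> futures;
        for (std::size_t i = 0; i < parts; ++i) {
            const std::size_t offset = i * slice;
            const std::size_t size = std::min(slice, total - offset);
            // Each ranged read runs on its own thread.
            futures.emplace_back(std::async(std::launch::async, fetch_range, offset, size));
        }

        std::vector<char> data;
        data.reserve(total);
        for (auto &f : futures) {           // collect the slices in order
            const auto part = f.get();
            data.insert(data.end(), part.begin(), part.end());
        }
        std::cout << "read " << data.size() << " bytes\n";
        return 0;
    }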

DAP4

  • DAP4 does not support the DAP2 Grid type. The code that handles DAP2 Grid coordinates caused some DAP4 coordinate variables under different groups to be ignored, so this fix ensures that the DAP2 Grid coordinate-handling code is not called in the DAP4 case.

General

  • Added GitHub Actions to bes.
  • Stopped the parser's EffectiveUrl resolution activity.
  • Added throttle to BESUtil::file_to_stream().
  • Ensured data value correctness for the classic model.
    • When a data type mapping mismatch is encountered, an error is generated.
    • For the classic model, ensured that the _FillValue datatype is the same as the variable datatype.
  • Server handler refactor.
  • Fixed duplicate CF history entries.
  • Performed a comprehensive check of datatype matching and ensured that the _FillValue attribute type is the same as the variable type.
  • Added new implementation of temp file transfer code for fileout_netcdf.
  • Added config param Http.UserAgent.
  • Fixed missing netCDF-4 and compression information when a DAP2 Grid maps to three netCDF variables.
  • Added a call to ftruncate() in the cache-file update code, along with unit tests for the string replace_all() function.

fileout_netcdf

  • Added support for streaming netCDF3 files that do not contain Structures.
  • Fixed a small memory leak in the history attribute code in the transmitter.
  • Added the history attribute to DAP4 responses.
  • Added NC.PromoteByteToShort=true to the configuration file, making it consistent with nc.conf.in.
  • Ensured that signed 8-bit integer values are correctly represented in DAP2.
  • Removed the unused getAttrType function from FONcArray.cc.
  • Dropped the throttle from FONcTransmitter.

hdf5_handler

CF option

  • The DMR response can now be generated directly, rather than from the DDS and DAS.
    • To take advantage of this feature, the H5.EnableCFDMR key needs to be set to true in the configuration file (h5.conf); see the example after this list.
    • The biggest advantage of this implementation is that the signed 8-bit integer mapping is kept. DMR generation from the DDS and DAS maps signed 8-bit integers to 16-bit integers because of a limitation of the DAP2 data model.
  • Bug fix: Ensured that the path inside the coordinates attribute of the TROPOMI AI product is flattened.
  • Updated the handling of escaping special characters at NASA's request. The '\' and '"' characters are no longer escaped.
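
As noted above, direct CF DMR generation is controlled by a single key. A minimal example of the change in h5.conf (or in a local BES configuration override) is:

    # h5.conf: enable direct DMR generation for the CF option
    H5.EnableCFDMR=true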

Added in Hyrax-1.16.3

NGAP & DMR++ Improvements

  • The dmr++ production chain (get_dmrpp, build_dmrpp, check_dmrpp, merge_dmrpp, and reduce_mdf) received the following updates:
    • Support for injecting configuration modifications to allow fine tuning of the dataset representation in the produced dmr++ file.
    • Support for HDF5 COMPACT layout data.
    • Optional creation and injection of missing (domain coordinate) data as needed.
    • Endian information carried in Chunks
    • Int64 support
    • Updated command line options and help page.
  • Improved S3 reliability by adding retries for common S3 error responses that indicate a retry is worth pursuing (S3 simply fails sometimes and a retry is suggested); a minimal retry sketch appears after this list.
  • Improved and more transparent error handling for remote access issues.
  • Migrated the service implementation from making parallel requests using multi-cURL to the C++11 std::async and std::future mechanism.
  • Added caching of S3 “effective” URLs obtained from NGAP service chain.
  • Implemented support for EDL token chaining
  • New implementation of the ngap restified path parser that is (almost) impervious to the key-value content in the path.
  • Implemented the SuperChunk optimization for mass acquisition of required, consecutive chunks.
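
The retry item above describes bounded retries for transient S3 failures. The sketch below illustrates that idea only; the transient_error type, the attempt count, the back-off, and the get_object() stand-in are assumptions, not the handler's implementation.

    // Illustrative retry loop for transient S3 errors (not the actual handler code).
    #include <chrono>
    #include <iostream>
    #include <stdexcept>
    #include <string>
    #include <thread>

    struct transient_error : public std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    // Stand-in for a single S3 request that sometimes fails with a retryable error.
    static std::string get_object(const std::string &url, int attempt) {
        if (attempt < 2) throw transient_error("503 Slow Down");  // simulated flakiness
        return "object bytes from " + url;
    }

    static std::string get_with_retries(const std::string &url, int max_attempts = 3) {
        for (int attempt = 0; attempt < max_attempts; ++attempt) {
            try {
                return get_object(url, attempt);
            }
            catch (const transient_error &e) {
                std::cerr << "retryable S3 error: " << e.what() << "\n";
                // Simple linear back-off before trying again.
                std::this_thread::sleep_for(std::chrono::milliseconds(100 * (attempt + 1)));
            }
        }
        throw std::runtime_error("S3 request failed after retries: " + url);
    }

    int main() {
        std::cout << get_with_retries("https://s3.amazonaws.com/bucket/key") << "\n";
        return 0;
    }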

hdf5_handler

Default option

  • Significantly enhanced the handling of netCDF-4-like HDF5 files for the DAP4 (DMR) response:
    • Correctly handled pure netCDF-4 dimensions.
    • Removed the pre-defined netCDF-4 attributes.
    • Updated the test suite that covers NASA DAP4 responses.

CF option

  • ICESat-2 ATL03 and ATL08 support: 
    • Added the group path and made the variable names follow the CF naming conventions for the "coordinates" attributes of ATL03-like variables.
  • Ensured that unsupported objects are removed from the DMR response.
  • Added support for the new GPM DPR level 3 version product. 
     

hdf4_handler

CF option

Enhanced support for handling HDF-EOS2 swath multiple dimension-map pairs.

  • The enhancement includes support for multiple swaths.
  • This fix solves the MOD09/MYD09 issue documented in HFRHANDLER-33.
  • Added a BES key to turn off the handling of HDF-EOS2 swath dimension maps.

Limitations

  1. The latitude and longitude must be held in two-dimensional arrays.
  2. The number of dimension maps must be an even number in a swath.
  3. The handling of MODIS level 1B remains unchanged.
  4. When there is only one pair of dimension maps in a swath and the geo-dimensions defined in the dimension maps are used only by 2-D Latitude and Longitude fields, the old handling is used.

Variable/dimension name conventions

  • When the HDF-EOS2 file contains only one swath:
    • The swath name is not included in the variable names.
    • The interpolated latitude and longitude variables are named "Latitude_1", "Latitude_2", "Longitude_1", and "Longitude_2".
    • The dimension and other variable names are simply modified to follow the CF conventions.
    • A DDS example can be found at https://github.com/OPENDAP/hdf4_handler/testsuite/h4.nasa.with_hdfeos2/MYD09.dds.bescmd.baseline
       
  • When the HDF-EOS2 file contains multiple swaths:
    • The swath names are included in the variable and dimension names to avoid name clashes.
    • The swath names are added as suffixes to the variable and dimension names.
    • Examples: "temperature_swath1", "Latitude_swath1", "Latitude_swath1_1", etc.
    •  A DDS example can be found at https://github.com/OPENDAP/hdf4_handler/bes-testsuite/h4.with_hdfeos2/swath_3_3d_dimmap.dds.bescmd.baseline
       
  • For applications that do not want dimension maps handled, change the BES key "H4.DisableSwathDimMap=false" in h4.conf.in to "H4.DisableSwathDimMap=true".

BALTO & Dataset Search

  • Updated the JSON-LD content of the server's Data Request Form pages so that it is (once again) in keeping with the (evolving) rules enforced by Google's Dataset Search.

DAP4

  • AsciiTransmit now supports DAP4 functions.
  • Added group support to fileout netCDF-4.

General

  • End Of Life for CentOS-6 Support - It’s been a long road CentOS-6, but NASA has given us the OK to drop support for you just days before your nominal end of life. On to the next std::future.
  • Dropped the “longest matching” Whitelist configuration key in favor of a multiple regular expressions configuration using the new AllowedHosts key.
  • Consolidation of internal HTTP code and caching for all services. This means more consistent behavior and error handling everywhere the server has to reach out for something.
  • Introduced log message types: request, error, info, verbose, and timing, all of which are written to the file named by BES.LogName. Each type is identified in the log and has a "fixed" format that can be reliably parsed by downstream software.
  • SonarCloud and/or Snyk scanning is now a blocking step for all Hyrax component PRs.
  • Our Docker images have been updated to utilize ncWMS-2.4.2 which is compatible with current Tomcat security measures. This means ncWMS2 is working again…
  • Dynamic Configuration - This feature is currently a proof-of-concept idea and is disabled with a compiler macro. To become an actual feature it will need to be implemented in a much more thoughtful and efficient manner, which we will be happy to do if there is sufficient interest!

Added in Hyrax-1.16.2

Hyrax In The Cloud

Hyrax is currently deployed and running as a fault-tolerant, scalable, highly available service in the NASA NGAP 2.0 production system, using AWS CloudFront scripts deployed with Bamboo. The NASA/NGAP system is a subset of Amazon's AWS cloud, and the same architecture could be deployed outside of NGAP with minimal effort. Thanks to Doug Newman and the NGAP team for this work.

  • Fault tolerant: The Hyrax deployment uses multiple instances and Amazon's ALB to distribute load across multiple instances. If one instance fails, others can take on the load.
  • Scalable: An auto-scaling group will adjust to load if necessary.
  • Highly available: Failing instances will be detected and replaced. Deployments will be 'blue green' and thus not cause a service outage.
  • Hyrax can generate signed S3 requests when processing dmr++ files whose data content lives in S3, provided the correct credentials are supplied (injected) into the server.
  • Hyrax can use Earthdata Login to authenticate users for data access.

Hyrax Regression Tests

The Hyrax regression tests have been moved out of the OLFS and into their own project. With this change comes new capabilities:

  • The regression tests can be run against any Hyrax server instance with the default (packaged) data available.
  • The target Hyrax instance can be running at any endpoint URL, as long as it is specified at runtime.
  • The regression tests can be made to authenticate, if needed, by specifying a netrc file at runtime (see the example after this list).
  • These regression tests will soon become part of the Hyrax continuous integration process.
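
For reference, a minimal netrc entry for NASA Earthdata Login authentication looks like the following; the username and password are placeholders.

    machine urs.earthdata.nasa.gov
        login your_edl_username
        password your_edl_password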

DMR++ Development

Kent Yang of The HDF Group has been developing code to resolve problems encountered by dmr++ representations when the underlying data do not contain domain coordinate variables. Hyrax can synthesize these variables at runtime, and Kent has been applying these techniques to dmr++ generation. Stay tuned for more on this front as we integrate the results into the automated dmr++ production.

Added in Hyrax-1.16.1

Added (alpha) support for S3 authentication credentials

  • Hyrax can access data in a protected Amazon Web Services S3 bucket using credentials provided by a pair of environment variables.
  • For situations where Hyrax needs to use several sets of credentials with S3, it now supports storing credentials in a configuration file. Credential sets are associated with URL prefixes, making the configuration easy.
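
A rough sketch of the configuration-file idea follows: each named credential set is bound to a URL prefix and matched against the URLs the server needs to access. The key names and layout below are illustrative assumptions, not the definitive syntax; consult the shipped BES documentation for the exact format.

    # Illustrative only - the exact key names/format come from the BES documentation.
    cloud_data=url:https://s3.amazonaws.com/my-bucket/
    cloud_data+=id:YOUR_ACCESS_KEY_ID
    cloud_data+=key:YOUR_SECRET_ACCESS_KEY
    cloud_data+=region:us-east-1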

Improved Server Logging

  • The server can now be set to add the OLFS log content to the BES log file, simplifying configuration and problem diagnosis.

NASA Earthdata Login User Authentication Support

  • Hyrax supports NASA's Earthdata Login system, which is based largely on the OAuth2 protocol. If you need to stage a server behind OAuth2, you may be able to use, extend, or modify the implementation in Hyrax.

Data Request Form and Catalogs

  • The Data Request Form now offers a configuration parameter to control whether users must choose individual variables before getting data. For some sites, it makes sense to let users access all the data in a dataset with one click, while for other sites this is not appropriate. You can now configure this behavior using <RequireUserSelection /> in the olfs.xml file (see the sketch after this list).
  • You can now disable the automatic generation of THREDDS catalog files, and have Hyrax use catalogs you provide, instead. If <NoDynamicNavigation/> is uncommented in the olfs.xml file, then all of the dynamically generated catalog/navigation pages will be disabled. The server admin must either supply and maintain THREDDS catalogs, or provide their own system of navigation and discovery to generate links to the dataset endpoints. Note that this new option does not disable the Data Request Form.
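
Both switches live in olfs.xml; the fragment below is a trimmed sketch showing just these two elements (the rest of the file is omitted and unchanged).

    <!-- Fragment of olfs.xml (abbreviated) -->
    <!-- Require users to select variables before the Data Request Form returns data: -->
    <RequireUserSelection />
    <!-- Disable the dynamically generated THREDDS catalogs and navigation pages: -->
    <NoDynamicNavigation />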

Performance Improvements

  • Reduced the time to first byte for users by eliminating the unnecessary construction of metadata objects for the data response.

Added in Hyrax-1.16.0

Dataset Search Engines

Datasets served by Hyrax now provide the information Google and other search engines need to make these data findable. All dataset landing pages and catalog navigation (contents.html) pages now contain embedded JSON-LD, which crawlers such as Google Dataset Search, NSF's GeoCODES, and other data-sensitive web crawlers use for indexing datasets. To facilitate this, certain steps can be taken by the server administrator to bring the Hyrax service to Google's (and other crawlers') attention. Find more about Hyrax and JSON-LD here. Our work on JSON-LD support was funded by NSF Grant #1740704.
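
As a rough illustration of what such a page embeds, the snippet below shows the general shape of a schema.org Dataset description; the field values are placeholders, not the exact markup Hyrax emits.

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "Dataset",
      "name": "Example dataset name",
      "description": "Short description of a dataset served by Hyrax.",
      "url": "https://your.server/opendap/data/example.nc.html"
    }
    </script>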

Serving Data From S3

Hyrax 1.16 has prototype support for subset-in-place of HDF5 and NetCDF4 data files that are stored on AWS S3. See the preliminary documentation in GitHub.

The new support includes software that can configure data already stored in S3 and still on spinning disk so that it can be served (and subset) in-place from S3 without reformatting the original data files. Support for other web object stores besides S3 has also been demonstrated.

This work on serving data from S3 was supported by NASA, Raytheon, and The HDF Group.

Experimental support for STARE Indexing

We have added experimental support for STARE (Spatio Temporal Adaptive-Resolution Encoding). STARE provides a way for locations on the Earth to be denoted using a single integer number instead of the conventional Latitude and Longitude notation and provides rapid intercomparisons for finding co-located data. Our work on STARE indexing was supported by NASA Grant 17-ACCESS17-0039.


Bug Fixes

For Hyrax-1.16.3:

Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing them, you can see all of the tickets we processed here.

We also worked extensively with NASA on their NGAP project. We processed 94 tickets during the release period. If you have access to NASA's JIRA, you can see the details here.

For Hyrax-1.16.2:

Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing them, you can see all of the tickets we processed here.

We also worked extensively with NASA on their NGAP project. We processed 71 tickets during the release period. If you have access to NASA's JIRA, you can see the details here.

For Hyrax-1.16.1: The following issues have been fixed:

  • HK-272 - MDS bug - LMT of data not used
  • HK-361 - More performant handling of contiguous data for the DMR++ handler
  • HK-376 - Have Travis add the source distribution tar balls to the S3 bucket.
  • HK-411 - Fix the --baselines feature of the libdap DMRTest
  • HK-413 - Persistent leaks in the libxml2-based NCML parser.
  • HK-404 - Address operational and efficiency issues in the MDS
  • HK-426 - Form interface bug - Structures do not work correctly - two issues
  • HK-439 - bes source release
  • HK-444 - Build initial version of ncdmr.cc that can read the fnoc1.nc and build a DMR.
  • HK-445 - Modify the simple ncdmr.cc code so that it includes the attributes.
  • HK-446 - Modify the ncdmr.cc code so that it correctly recognizes shared dimensions in fnoc1.nc and coads_climatology.nc
  • HK-447 - Modify the ncdmr.cc code so that it can work with netCDF4 files that use groups.
  • HK-448 - Modify the ncdmr.cc code so that it can work with netCDF4 files that contain structures.
  • HK-449 - Integrate the ncdmr.cc code into the netCDF handler so that it is used for the DMR response.
  • HK-454 - the dmrpp_module is unable to build a dmr++ for the test file data/dmrpp/grid_1_2d.h5
  • HK-456 - Install the BES RPM package built from a PR and start the BES from that install. Check for failure.
  • HK-457 - The class BESRegex is utilized in a way that is incompatible with the underlying implementation. FIX
  • HK-458 - Web interface bug for Structures and Sequences
  • HK-459 - When Hyrax 1.16 runs, we see some error messages "leaking out of stderr"
  • HK-472 - BESInternalError Exception thrown by the NcML handler not handled properly
  • HK-473 - Implement combined olfs/bes log.
  • HK-474 - BES 3.20.5 memory errors
  • HK-485 - Modify the CI/CD process to make the docker image
  • HK-492 - Review the Travis activities for olfs, bes, and libdap
  • HK-537 - Reported problem in fileout_netcdf associated with _FillValue in Ocean Color dataset
  • HK-574 - Memory leak in AWSV4

For Hyrax-1.16.0: The following issues have been fixed:

  • Issues and Improvements with the CovJSON response were contributed by Corey Hemphill.
  • NetCDF file responses were not compressed when they should have been. Fix by Aron Bartle at mechdyne.com.
  • HDF5 handler: CF option: Fixed a small memory leak when handling the OCO2 Lite product. Fix by Kent Yang at The HDF Group.
  • HK-22 - The max_response_size limit is not working. Why? Fix! 
  • HK-23 - Fileout netCDF cannot generate a valid netCDF file when string datatype has a _FillValue 
  • HK-128 - FreeForm: Added regex pattern matching for format application. 
  • HK-311 - When running the httpd_catalog _tests_ I get intermittent errors on the first test.
  • HK-327 - Add response size logging to Hyrax.
  • HK-338 - In the remote THREDDS catalog presentation pages, dataset detail URL links contain spurious "/" (as "//") characters.
  • HK-351 - Gmljp2 output seems empty/broken.
  • HK-352 - fileout geotiff doesn't work for NCEP dataset.
  • HK-354 - Rewrite the Hyrax-Guide so that the OLFS configuration section reflects current situation.
  • HK-357 - Add tests for C++-11 support.
  • HK-360 - Improvement of Time Aggregation when using the DMR++ software.
  • HK-364 - Adopt New HDF5 library API for chunk info in the DMR++ handler.
  • HK-365 - Document how to serve data from S3.
  • HK-366 - Reanimate the BesCatalogCache (as BesNodeCache) but without worker threads.
  • HK-369 - Fix IFH for variable names containing things like "-" or "." which breaks the java script.
  • HK-372 - WCS fails to implement the needful atomic types. FIX.
  • HK-375 - Create SiteMap cache file to improve response site map navigation speeds for large holdings.
  • HK-387 - The httpd catalog is not showing content from the IRIS data server.
  • HK-388 - DMRs built (by libdap) fail to correctly XML encode attribute values and this breaks things.
  • HK-389 - fileout_netcdf not making compressed files when it should.
  • HK-398 - Error found by the Google JSON-LD checker in JSON-LD added for BALTO.
  • HK-403 - Memory leak in ncml_handler when accessing data from aggregated dmr++ files.
  • HK-407 - Improve the dmrpp parser.
  • HK-409 - Further GDAL tests: local netCDF tests.
  • HK-410 - D4ParserSax2 removes newline chars from element content.
  • HK-417 - Debug the httpd_catalog for IRIS on balto.o.o.
  • HK-421 - Catch up on SonarCloud issues in OLFS now that the CI scanner is working.

Hyrax Software Downloads

Hyrax is open source and so is available as compiled binaries and source code. We also produce Docker images of Hyrax and its components.

Binary Packages

  Download Binaries for CentOS-7

  Install Binaries

Docker Images (About the Docker build process)

  Hyrax - The complete Hyrax service in a single Docker image.

  Hyrax with ncWMS2 - The Hyrax service bundled with ncWMS2 in a single Docker image.

  besd - The BES daemon in a single Docker image, typically used with Docker Compose and the olfs image.

  olfs - The OLFS (and Tomcat) in a single Docker image, typically used with Docker Compose and the besd image.
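
For a quick local test, something like the following works once you know the image name; the opendap/hyrax name and the latest tag used here are assumptions, so check our Docker Hub page for the current image names and tags.

    # Image name and tag are assumptions; see the OPeNDAP Docker Hub page for current ones.
    docker pull opendap/hyrax:latest
    docker run -d -p 8080:8080 opendap/hyrax:latest
    # Then browse to http://localhost:8080/opendap/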

Source Code

Continuous Integration

Required External Dependencies

In order to run Hyrax 1.16, you will need:

  • Java 1.7 or greater
  • Tomcat 7.x or 8.x
  • Linux (We provide RPMs for CentOS-7.x; install them with yum), Ubuntu, OS-X or another suitable Unix OS.

Binaries for Hyrax 1.16.4

Software Components for Hyrax 1.16.4

To run the Hyrax server, download and install the following (from source or binary):

  • OLFS (Java 1.8+)
  • libdap
  • BES
  • ncWMS2 (optional)

OLFS (Java-1.7)

ncWMS2 (Java-1.7) (optional)

BES

Linux (CentOS 7.x) x86_64 RPMs

All of the CentOS-7 RPMs we build, including the devel and debuginfo packages

  • libdap-3.20.8-1 (gpg signature) - The libdap library RPM for this release.
     
  • bes-3.20.9-6.static (gpg signature) - This RPM includes statically linked copies of all of the modules/handlers we support, including HDF4 & 5 with HDFEOS support. There is no need to install packages from EPEL with this RPM. Other sources of RPM packages will likely provide a bes RPM that uses handlers linked (dynamically) to dependencies from their distributions (CentOS, Fedora, etc.). Note: the bes.conf file has important changes in support of JSON-LD. Make sure to look at /etc/bes/bes.conf.rpmnew after you install/upgrade the BES with these RPMs.

CentOS-6

CentOS-6 has reached end of life and we are no longer supporting it.

Installing the binary distribution

BES Installation

  • Download the RPM packages (see above) for your target operating system.
  • Use yum to install the libdap and bes RPMs:
    sudo yum install libdap-3.20.*.rpm bes-3.20.*.rpm
    (Unless you're going to be building software from source for Hyrax, skip the *-devel and *-debuginfo RPMs.)
  • Look at the /etc/bes/bes.conf.rpmnew file. Localize and merge the new BES.ServerAdministrator information into your bes.conf file. Note the format of the new BES.ServerAdministrator entries, as it has changed from the previous version (a sketch of the new style appears after these steps).
  • At this point you can test the BES by typing the following into a terminal:
    • start it:
        sudo service besd start
    • connect using a simple client:
        bescmdln
    • and get version information:
        show version;
    • exit from bescmdln:
        exit
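
As referenced in the installation steps above, the BES.ServerAdministrator entries now use a key:value style. The lines below are only a sketch with placeholder values; copy the authoritative entries from /etc/bes/bes.conf.rpmnew.

    # Placeholder values - use /etc/bes/bes.conf.rpmnew as the authoritative template.
    BES.ServerAdministrator=email:admin@example.org
    BES.ServerAdministrator+=organization:Example Data Center
    BES.ServerAdministrator+=street:123 Example Street
    BES.ServerAdministrator+=city:Anytown
    BES.ServerAdministrator+=website:http://www.example.org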

BES Notes - If you are upgrading from an existing installation older than 1.13.0

  • In the bes.conf file the keys BES.CacheDir, BES.CacheSize, and BES.CachePrefix have been replaced with BES.UncompressCache.dir, BES.UncompressCache.size, and BES.UncompressCache.prefix respectively. Other changes include the gateway cache configuration (gateway.conf) which now uses the keys Gateway.Cache.dir, Gateway.Cache.size, and Gateway.Cache.prefix to configure its cache. Changing the names enabled the BES to use separate parameters for each of its several caches, which fixes the problem of 'cache collisions.'
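
For example, an older configuration using the removed keys maps onto the new keys like this; the paths and sizes below are placeholders.

    # Old (pre-1.13) keys - remove these:
    #   BES.CacheDir=/tmp/hyrax_cache
    #   BES.CacheSize=500
    #   BES.CachePrefix=cache
    # Replacement keys (placeholder values):
    BES.UncompressCache.dir=/tmp/hyrax_ucache
    BES.UncompressCache.size=500
    BES.UncompressCache.prefix=ucache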

OLFS and Starting the Server

CentOS 7, modern Ubuntu/Debian systems:

Install tomcat (sudo yum install tomcat)

  • Make the directory /etc/olfs and ensure tomcat can write to it. (sudo mkdir /etc/olfs; chgrp tomcat /etc/olfs; chmod g+w /etc/olfs)
  • Unpack the opendap.war web archive file from olfs-1.18.1-webapp.tgz (tar -xzf olfs-1.18.1-webapp.tgz)
  • Install the opendap.war file (sudo cp opendap.war /usr/share/tomcat/webapps)
    NOTE: On current CentOS-7, the default SELinux rules will prohibit Tomcat from reading the war file :(
    This can be remediated by issuing the following two commands as the super user:
    • sudo semanage fcontext -a -t tomcat_var_lib_t \
           /var/lib/tomcat/webapps/opendap.war
    • sudo restorecon -rv /var/lib/tomcat/webapps/
  • Start tomcat: 
    • sudo service tomcat start

CentOS-6

  • CentOS-6 has reached end of life and we are no longer supporting it.

Test the server:

  • In a web browser, use http://localhost:8080/opendap/
  • Look at sample data files shipped with the server

Notes:

  • If you are installing the OLFS in conjunction with ncWMS2 version 2.0 or higher: Copy both the opendap.war and the ncWMS2.war files into the Tomcat webapps directory. (Re)Start Tomcat. Go read about, and then configure ncWMS2 and the OLFS to work together.
  • From here, or if you are having problems, see our new Hyrax Manual and the older Hyrax documentation page
  • ATTENTION - If you are upgrading Hyrax from any previous installation older than 1.15, read this!
    The internal format of the olfs.xml file has been revised. No previous version of this file will work with Hyrax-1.15. In order to upgrade your system, move your old configuration directory aside (e.g., mv /etc/olfs ~/olfs-OLD) and then follow the instructions to install a new OLFS. Once you have it installed and running, you will need to review your old configuration and make the appropriate changes to the new olfs.xml to restore your server's behavior. The other OLFS configuration files have not undergone any structural changes and you may simply replace the new ones that were installed with copies of your previously working ones.
  • To make the server restart when the host boots, use systemctl enable besd and systemctl enable tomcat, or chkconfig besd on and chkconfig tomcat on, depending on the specifics of your Linux distribution.

Source code for Hyrax 1.16.4

Source from GitHub

Snapshot builds

Snapshot builds from the Continuous Integration and Delivery (CI/CD) system are available in Docker images.

See our Docker Hub page for the latest "snapshot" CI/CD build of the server.