

Submitted by ndp on Mon, 07/01/2019 - 15:15


Hyrax-1.16.8 (updated 25 July 2022)

New Features

Log4j Vulnerability Statement: Hyrax does not use Log4j and does not ship with any Log4j library jar files.

Added in Hyrax-1.16.8

Our Docker containers have migrated to Rocky-8 as the base operating system

Improved the error messages returned to the client 

Improved the DAP4 Data Request Form (now the default form for the server)


Added in Hyrax-1.16.7

Support for RHEL8

With the release of Hyrax-1.16.7 we are happy to announce that a complete set of Enterprise Linux 8 (el8) binaries are available.
See the binaries section below.

  • Patched a bug introduced by the std::vector refactor
  • Added time.h header to ppt/


Added in Hyrax-1.16.6

See the binaries section below.

DMR++ Improvements

  • Various improvements for supporting FillValue in dmr++ lifecycle.
  • Improved support for arrays of type String.
  • Fixed trusted url bug in DMZ parser.
  • Added support for "empty" valued scalars with associated _FillValue metadata.
  • get_dmrpp Improvements
  • Added support for S3 hosted granules to get_dmrpp


  • Support for RHEL8 
  • Refactored the get_dmrpp application. Some test features are still broken, but core functionality is working now.
  • Improved support for more GES DISC level 3 and level 4 products 
  • Added coverage support for the AIRS level 3 and GLDAS level products.
  • Modified the fileout_netcdf handler to allow netCDF-3 responses to be up to 4GB in size. This behavior can be reverted by setting FONc.NC3ClassicFormat=true in the BES configuration (e.g., the /etc/bes/site.conf file)
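
The netCDF-3 size change can be reverted with a single BES key; a minimal sketch of the site.conf entry, using the key named in the note above (the file path is the one commonly used for BES site overrides):

```
# /etc/bes/site.conf -- revert fileout_netcdf to classic netCDF-3 output
FONc.NC3ClassicFormat = true
```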


  • Patched a bug where unexpected Authentication headers would trigger a redirect loop.
  • Fixed the broken service that was failing to deliver flat (non-data) files to clients.
  • Made the "Get As NetCDF-3" and "Get As DAP2 Binary" buttons on the DAP4 Data Request Form context sensitive. If the dataset in question contains variables whose data types are found in DAP4 but not in DAP2/NetCDF-3, the buttons are disabled. A more complete solution is envisioned in which the projected variables are assessed, and if only DAP2/NetCDF-3 types are selected the buttons would be enabled. This fix is only a step in that more dynamic direction.
  • Tested on Tomcat-9.0.x

Added in Hyrax-1.16.5

DMR++ Improvements

  • Added support for the HDF5 filter Fletcher32 to the dmr++ creation and processing code.
  • Implemented lazy evaluation of dmr++ files. This change greatly improves efficiency/speed for requests that subset a dataset containing a large number of variables, as only the requested variables have their Chunk information read and parsed.
  • Added version and configuration information to dmr++ files built using the build_dmrpp and get_dmrpp applications. This will enable people to recreate and understand the conditions which resulted in a particular dmr++ instance. This also includes a -z switch for get_dmrpp which will return its version.
  • Performance improvement: By patching Chunk::add_tracking_query_param() so that it does nothing when no parameter is submitted, we eliminated a very costly regular expression evaluation that was being performed during the read operation for every Chunk. This improved read performance by 2-3 orders of magnitude!

BES Handler Updates

  • Added a new netcdf_handler configuration parameter, NC.PromoteByteToShort, which, when set to true, causes signed 8-bit integer values to be promoted to Int16 (because the DAP2 data model does not support signed 8-bit integers)
  • By default the NetCDF Fileout feature will ship with FONc.ClassicModel=false
  • Added a new configuration option to the hdf5_handler, EnableCFDMR=true, which allows the generation of CF-compliant DMR output.
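
As a sketch, the three handler options above might appear in a BES configuration file as follows (key spellings are taken from the text; whether EnableCFDMR needs the H5. prefix used elsewhere in these notes is an assumption):

```
# BES site.conf sketch for the handler options described above
NC.PromoteByteToShort = true   # promote signed 8-bit ints to Int16 for DAP2
FONc.ClassicModel = false      # the new NetCDF Fileout default
H5.EnableCFDMR = true          # hdf5_handler: generate CF-compliant DMR
```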

OLFS Configuration

In order to make the server's behavior more understandable to users, and its configuration more understandable to admins, we have changed the way the server responds to client requests for the unadorned Dataset URL and the way it generates Data Request Form links in its catalog pages. There are new configuration parameters to control these behaviors.

DEPRECATED <UseDAP2ResourceUrlResponse />

This configuration parameter has been deprecated - Use the <DatasetUrlResponse /> and <DataRequestForm /> elements to configure the same behavior (see below).

<DatasetUrlResponse type="..." />

The DatasetUrlResponse element is used to configure the type of response that the server will generate when a client attempts to access the unadorned Dataset URL. The type of response is controlled by the value of the type attribute.

Allowed Values:

  • dsr - The DAP4 DSR response will be returned for the dataset URL. Note: This setting is not compatible with the DataRequestForm type "dap2", as the DSR response URL collides with the DAP2 Data Request Form URL.
  • download - If the configuration parameter AllowDirectDataSourceAccess is set (present), then the source data file will be returned for the dataset URL. If it is not present, a 403 (Forbidden) response will be returned for the dataset URL. (This is basically a file retrieval service; any constraint expression submitted with the unadorned dataset URL will be ignored.)
  • requestForm - The Hyrax Data Request Form page will be returned for the dataset URL. Which form is returned is controlled by the  DataRequestForm configuration element.

Default: download
<DatasetUrlResponse type="download"/>

<DataRequestForm type="..." />

The DataRequestForm element defines the target DAP data model for the dataset links in the "blue-bar" catalog.html pages. These links point to the DAP Data Request Form for each dataset. This element also determines the type of Data Request Form page returned when the DatasetUrlResponse type="requestForm" and the request is for the Dataset URL. Allowed type values are: dap2 and dap4.

Default: dap4
<DataRequestForm type="dap4" />

<AllowDirectDataSourceAccess />

When enabled, users will be able to use Hyrax as a file server and download the underlying data files/granules/objects directly, without utilizing the DAP APIs.

Default: disabled
<!--AllowDirectDataSourceAccess / -->

<ForceDataRequestFormLinkToHttps />

The presence of this element will cause the Data Request Form interfaces to "force" the dataset URL to HTTPS. This is useful when the server sits behind a connection management tool (like AWS CloudFront) whose outward-facing connections are HTTPS while Hyrax itself is not using HTTPS, so the internal URLs received by Hyrax are on HTTP. When these URLs are exposed via the Data Request Forms, they can cause some clients to drop sessions because the protocols are not consistent.

Default: disabled
<!-- ForceDataRequestFormLinkToHttps / -->
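
Putting the elements above together, a hedged olfs.xml fragment (element names are from this section; their exact placement within olfs.xml is an assumption) might look like:

```xml
<!-- Serve the DAP4 Data Request Form for unadorned dataset URLs -->
<DatasetUrlResponse type="requestForm"/>
<DataRequestForm type="dap4"/>

<!-- Uncomment to let users download source data files directly -->
<!-- <AllowDirectDataSourceAccess/> -->

<!-- Uncomment when Hyrax sits behind an HTTPS-terminating proxy -->
<!-- <ForceDataRequestFormLinkToHttps/> -->
```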

OLFS Behavior

Log Sanitization - Log entries for User-Agent, URL path, and query string are now scrubbed. Previously only the query string was scrubbed.

DAP2 Data Request Form - Dropped the link in the DAP2 Data Request Form to the no-longer-supported BES-generated request form.

OLFS Java Dependency Library Updates

Updated the dependency libraries as follows:

  • Upgraded gson-2.3.1 to gson-2.8.9
  • Upgraded slf4j-1.7.16 to slf4j-1.7.32
  • Upgraded logback-core-1.1.11 to logback-core-1.2.9
  • Upgraded logback-classic-1.2.0 to logback-classic-1.2.9

Added in Hyrax-1.16.4

NGAP & DMR++ Improvements

  • Trusted CMR
    • Reworked code to use http::url instead of std::string.
    • Replaced strings with http::url object.
    • Moved AllowedHosts to http.
    • Fixed implementations of http::url::is_expired().
    • Switch RemoteSource constructor to shared_ptr.
    • Changed the way that http::url interprets protocol-less URLs.
    • Fixed concurrency issues in EffectiveUrlCache.
  • Corrected usage statement for get_dmrpp.
  • Handle the "missing data" files in the NGAP system.
  • Update NgapApiTest to reflect changes in CMR holdings.
  • Dropped a useless call to Chunk.inflate() and added a state check to protect us from a CONTIGUOUS variable that is marked as compressed.
  • Rewrote read_contiguous() with std::async() and std::future dropping the SuperChunk idea.
  • First implementation of the new restified path with two mandatory and one optional path components.
  • OLFS
    • NGAP Updates:
      • Fixed Gradle dependencies according to the Snyk scan.
      • Session serialization 
      • Added session manager jars to ngap resources.
      • Updated production rules to include session manager code in the ngap application build.
      • Made history_json output as JSONArray.
      • New landing page for NGAP service.
      • Added UNIX time to OLFS logs in NGAP.
    • Bug Fixes:
      • Fixed bug in computation of dimension sizes in the dap4 IFH xsl.
      • Dropped DSR response for just the dataset_url.
      • Fixed broken json error output.
      • Changed the broken naming pattern for the PPT strings.
      • Enabled ChunkedInputStream debugging and cleaned up the messages in support of stream pollution problem.
      • Fixed broken css and deployment context links construction in DSR HTML page.
    • Performance Improvements:
      • Added file size and last modified headers to flat file transfer.
      • Added auth.log independent of debug.log.
      • First pass at patching POST request handling.


  • DAP4 doesn't support the DAP2 Grid. The code that handles DAP2 Grid coordinates caused some DAP4 coordinate variables under different groups to be ignored, so this fix ensures that the DAP2 Grid coordinate handling is not invoked for the DAP4 case.


  • Added GitHub Actions to bes.
  • Stop parser EffectiveUrl resolution activity.
  • Added throttle to BESUtil::file_to_stream().
  • Ensure the data value correctness for the classic model.
    • When a data type mapping mismatch is encountered, an error is generated.
    • For the classic model, ensure the _FillValue datatype is the same as the variable datatype.
  • Server handler refactor.
  • Fixing duplicate CF history entries.
  • Perform comprehensive check of datatype match and implementation of ensuring _FillValue attribute type the same as the variable type.
  • Added new implementation of temp file transfer code for fileout_netcdf.
  • Added config param Http.UserAgent.
  • Fixed missing netCDF-4 compression information when a DAP2 Grid maps to three netCDF variables.
  • Adds call to the ftruncate() function in the update cache files activity, unit tests for string replace_all().


  • Added support for streaming netCDF3 files that do not contain Structures.
  • Fix a small memory leak in the history attribute code at the transmitter.
  • The history attribute is now added to DAP4 responses.
  • Added NC.PromoteByteToShort=true to the default configuration file.
  • Ensured the values of signed 8-bit integers are correctly represented in DAP2.
  • Removed the unused getAttrType function.
  • Dropped the throttle from the fileout_netcdf transmitter.


CF option

  • The DMR response can be directly generated rather than from DDS and DAS. 
    • To take advantage of this feature, the H5.EnableCFDMR key needs to be set to true in the configuration file (h5.conf).
    • The biggest advantage of this implementation is that the signed 8-bit integer mapping is kept. The DMR generation from DDS and DAS maps signed 8-bit integers to 16-bit integers because of the limitation of the DAP2 data model.
  • Bug fix: Ensure the path inside the coordinates attribute for the TROPOMI AI product is flattened.
  • Updated the handling of escaping special characters at NASA's request. The '\' and '"' characters are no longer escaped.

Added in Hyrax-1.16.3

NGAP & DMR++ Improvements

  • The dmr++ production chain: get_dmrpp, build_dmrpp, check_dmrpp, merge_dmrpp,  and reduce_mdf received the following updates
    • Support for injecting configuration modifications to allow fine tuning of the dataset representation in the produced dmr++ file.
    • Support for HDF5 COMPACT layout data.
    • Optional creation and injection of missing (domain coordinate) data as needed.
    • Endian information carried in Chunks
    • Int64 support
    • Updated command line options and help page.
  • Improved S3 reliability by adding retry efforts for common S3 error responses that indicate a retry is worth pursuing (because S3 just fails sometimes and a retry is suggested).
  • Improved and more transparent error handling for remote access issues.
  • Migrated the service implementation from making parallel requests using multi-cURL to the c++11 std:async and std:future mechanism.
  • Added caching of S3 “effective” URLs obtained from NGAP service chain.
  • Implemented support for EDL token chaining
  • New implementation of the ngap restified path parser that is (almost) impervious to the key/value content in the path.
  • Implemented the SuperChunk optimization for mass acquisition of required, consecutive chunks.


Default option

  • Significantly enhanced the handling of netCDF-4 like HDF5 files for the DAP4(DMR) response
    • Correctly handle the pure netCDF-4 dimensions.
    • Remove the pre-defined netCDF-4 attributes.
    • Updated the testsuite that tests NASA DAP4 response.

CF option

  • ICESat-2 ATL03 and ATL08 support: 
    • Add the group path and make the variable names follow the CF naming conventions for "coordinates" attributes of ATL03-like variables.
  • Ensure the unsupported objects are removed in the DMR response.
  • Added support for the new GPM DPR level 3 version product. 


CF option

Enhanced support for handling multiple dimension-map pairs in HDF-EOS2 swaths.

  • The enhancement includes support for multiple swaths.
  • This fix solves the MOD09/MYD09 issue documented in HFRHANDLER-33.
  • Added a BES key to turn off the handling of HDF-EOS2 swath dimension maps.


  1. The latitude and longitude must be held in 2 dimensional arrays.
  2. The number of dimension maps must be an even number in a swath.
  3. The handling of MODIS level 1B remains unchanged.
  4. When there is one pair of dimension maps in a swath and the geo-dimensions defined in the dimension maps are used only by 2-D Latitude and Longitude fields, we utilize the old way.

Variable/dimension name conventions

  • The HDF-EOS2 file contains only one swath.
    • The swath name is not included in the variable names.
    • For latitude and longitude, the interpolated latitude and longitude variable names are named as "Latitude_1","Latitude_2","Longitude_1","Longitude_2".
    • The dimension and other variable names are just modified by following the CF conventions. 
    • A DDS example can be found at
  • The HDF-EOS2 file contains multiple swaths
    • The swath name is included in the variable and dimension names to avoid name clashes.
    • The swath names are added as suffix for variable and dimension names.
    • Examples are like: "temperature_swath1","Latitude_swath1","Latitude_swath1_1" etc.
    •  A DDS example can be found at
  • For applications that don't want to handle dimension maps, change the BES key H4.DisableSwathDimMap from false to true.
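
If you need to disable dimension-map handling, the key change described above is a one-line edit; a sketch (the configuration file name is an assumption):

```
# hdf4 handler configuration (e.g., h4.conf): disable swath dimension maps
H4.DisableSwathDimMap = true
```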

BALTO & Dataset Search

  • Updated the JSON-LD content of the server's Data Request Form pages so that it is (once again) in keeping with the (evolving) rules enforced by Google's Dataset Search.


  • AsciiTransmit now supports DAP4 functions
  • Group support in fileout netcdf-4


  • End Of Life for CentOS-6 Support - It’s been a long road CentOS-6, but NASA has given us the OK to drop support for you just days before your nominal end of life. On to the next std::future.
  • Dropped the “longest matching” Whitelist configuration key in favor of a multiple regular expressions configuration using the new AllowedHosts key.
  • Consolidation of internal HTTP code and caching for all services. This means more consistent behavior and error handling everywhere the server has to reach out for something.
  • Introduced log message types: request, error, info, verbose, and timing, which all log to BES.LogName. Each type is identified in the log and has a "fixed" format that can be reliably parsed by downstream software.
  • SonarCloud and/or Snyk is now a blocking step for all Hyrax component PRs.
  • Our Docker images have been updated to utilize ncWMS-2.4.2, which is compatible with current Tomcat security measures. This means ncWMS2 is working again.
  • Dynamic Configuration - This feature is currently a proof-of-concept and is disabled with a compiler macro. To become an actual feature it will need to be implemented in a much more thoughtful and efficient manner, which we will be happy to do if there is sufficient interest!
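
For admins migrating from the old Whitelist key, a hedged sketch of AllowedHosts entries (the += accumulation syntax follows the usual BES key convention; the hostnames shown are hypothetical):

```
# bes.conf sketch: each AllowedHosts value is a regular expression
AllowedHosts = ^https:\/\/s3\.amazonaws\.com\/my-bucket\/.*$
AllowedHosts += ^https:\/\/data\.example\.org\/.*$
```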

Added in Hyrax-1.16.2

Hyrax In The Cloud

Hyrax is currently deployed and running as a fault-tolerant, scalable, highly available service in the NASA NGAP 2.0 production system using AWS CloudFront scripts deployed using Bamboo. The NASA/NGAP system is a subset of Amazon's AWS cloud system and the same architecture could be deployed outside of NGAP with minimal effort. Thanks to Doug Newman and the NGAP team for this work. 

  • Fault tolerant: The Hyrax deployment uses multiple instances and Amazon's ALB to distribute load across multiple instances. If one instance fails, others can take on the load.
  • Scalable: An auto-scaling group will adjust to load if necessary.
  • Highly available: Failing instances will be detected and replaced. Deployments will be 'blue green' and thus not cause a service outage.
  • Hyrax can generate signed S3 requests when processing dmr++ files whose data content live in S3 when the correct credentials are provided (injected) into the server.
  • Hyrax can use Earthdata Login to authenticate users for data access.

Hyrax Regression Tests

The Hyrax regression tests have been moved out of the OLFS and into their own project. With this change comes new capabilities:

  • The regression tests can be run against any Hyrax server instance with the default (packaged) data available.
  • The target Hyrax instance can be running at any endpoint URL, as long as it is specified at runtime.
  • The regression tests can be made to authenticate, if needed, by specifying a netrc file at runtime.
  • These regression tests will soon become part of the Hyrax continuous integration process.

DMR++ Development

Kent Yang of The HDF Group has been developing code to resolve problems encountered by dmr++ representations when the underlying data do not contain domain coordinate variables. Hyrax can synthesize these variables at runtime, and Kent has been applying these techniques to dmr++ generation. Stay tuned for more on this front as we integrate the results into the automated dmr++ production.

Added in Hyrax-1.16.1

Added (alpha) support for S3 authentication credentials

  • Hyrax can access data in a protected Amazon Web Services S3 bucket using credentials provided by a pair of environment variables.
  • For situations where Hyrax needs to use several sets of credentials with S3, it now supports storing credentials in a configuration file. Credential sets are associated with URL prefixes, making the configuration easy.
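
A hypothetical sketch of such a credentials file, with one credential set bound to a URL prefix (the key names and layout here are illustrative assumptions; consult the Hyrax documentation for the exact format):

```
# Hypothetical credentials file: a credential set matched by URL prefix
cloud_a = url:https://s3.amazonaws.com/bucket-a/
cloud_a += id:AKIAEXAMPLEID
cloud_a += key:exampleSecretKey
cloud_a += region:us-east-1
```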

Improved Server Logging

  • The server can now be set to add the OLFS log content to the BES log file, simplifying configuration and problem diagnosis.

NASA Earthdata Login User Authentication Support

  • Hyrax supports NASA's Earthdata Login system, which is based largely on the OAuth2 protocol. If you need to stage a server behind OAuth2, you may be able to use/extend/modify the implementation in Hyrax.

Data Request Form and Catalogs

  • The Data Request Form now offers a configuration parameter to control if users must choose individual variables before getting data. For some sites, it makes sense to enable users to access all the data in a dataset with one click, while for other sites this is not appropriate. Now you can configure this behavior using <RequireUserSelection /> in the olfs.xml file.
  • You can now disable the automatic generation of THREDDS catalog files, and have Hyrax use catalogs you provide, instead. If <NoDynamicNavigation/> is uncommented in the olfs.xml file, then all of the dynamically generated catalog/navigation pages will be disabled. The server admin must either supply and maintain THREDDS catalogs, or provide their own system of navigation and discovery to generate links to the dataset endpoints. Note that this new option does not disable the Data Request Form.
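
A sketch of the corresponding olfs.xml entries (element names are from the text above; their placement within olfs.xml is assumed):

```xml
<!-- Require users to select variables before requesting data -->
<RequireUserSelection/>

<!-- Uncomment to disable dynamically generated catalogs/navigation -->
<!-- <NoDynamicNavigation/> -->
```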

Performance Improvements

  • Reduced the time to first byte for users by eliminating the unnecessary construction of metadata objects for the data response.

Added in Hyrax-1.16.0

Dataset Search Engines

Datasets served by Hyrax now provide information Google and other search engines need to make these data findable. All dataset landing pages and catalog navigation (contents.html) pages now contain embedded json-ld which crawlers such as Google Dataset Search, NSF's GeoCODES, and other data sensitive web crawlers use for indexing datasets. In order to facilitate this, certain steps can be taken by the server administrator to bring the Hyrax service to Google (and other) crawlers attention. Find more about Hyrax and JSON-LD here. Our work on JSON-LD support was funded by NSF Grant #1740704.

Serving Data From S3

Hyrax 1.16 has prototype support for subset-in-place of HDF5 and NetCDF4 data files that are stored on AWS S3. See the preliminary documentation in GitHub.

The new support includes software that can configure data already stored in S3 and still on spinning disk so that it can be served (and subset) in-place from S3 without reformatting the original data files. Support for other web object stores besides S3 has also been demonstrated.

This work on serving data from S3 was supported by NASA, Raytheon, and The HDF Group.

Experimental support for STARE Indexing

We have added experimental support for STARE (Spatio Temporal Adaptive-Resolution Encoding). STARE provides a way for locations on the Earth to be denoted using a single integer number instead of the conventional Latitude and Longitude notation and provides rapid intercomparisons for finding co-located data. Our work on STARE indexing was supported by NASA Grant 17-ACCESS17-0039.

Bugs/Issues Addressed

For Hyrax-1.16.8:


  • Added Attributes with DAP4 types to the "No NetCDF-3 Downloads For You"  feature in the DAP4 Data Request Form.
  • Updated our Docker ncWMS to version 2.5.2
  • Migrated to Java-11
  • Fixed dap4Contents.xsl so that the viewers link works.
  • Migrated the annotations from the node_contents.xsl to dap4Contents.xsl
  • Rewrote links to https in the catalog markup in contents.html
  • Removed commented out code blocks.
  • Fixed contents.html page formatting with new css type


  • Improved the error messages for the "Response Is Too Big" feature in fileout_netcdf, and for the "Response Took Too Much Time to Marshall" feature.


  • Fixed bug in computation of request_size_kb()
  • Fixed type issue in (#192)

We worked extensively with NASA on their NGAP project. If you have access to NASA's JIRA you can see the details here. Otherwise, you'll need to read the Hyrax-1.16.8 narrative above and you can review the GitHub releases for the associated projects for more information:

For Hyrax-1.16.6:


  • Added code to detect client protocol changes (HYRAX-141). Updated ReqInfo.getRequestUrlPath() so that it utilizes the request headers:
    • CloudFront-Forwarded-Proto
    • X-Forwarded-Proto
    • X-Forwarded-Port
      when reconstructing the "request url" dereferenced by the initiating client. This means that if some upstream entity rewrites the URL (like CloudFront does when it drops the https protocol in favor of http for internal access), the rewrite can be detected and the links built by the server and returned to the client are now correct.
  • Refactored project so that all of the code that depends on the gdal library is in a single module, modules/gdal_module.
  • Retired use of auto_ptr.
  • Refactored timeout implementation and dropped the use of SIGALRM therein.
  • Added regression test suite for get_dmrpp
  • Continued general migration to C++11 coding norms.
  • Added Map elements to DMR
  • Modify the HDF5 handler so that DAP4 Coverages have Map elements. This support extends to a number of DAAC "specials"  like HDF-EOS5
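
The forwarded-header logic described for HYRAX-141 above can be illustrated with a short, non-normative Python sketch (the function name and header precedence are assumptions for illustration, not the actual ReqInfo.getRequestUrlPath() implementation):

```python
def effective_url(scheme: str, host: str, path: str, headers: dict) -> str:
    """Rebuild the URL the client actually dereferenced, honoring
    proxy forwarding headers such as those CloudFront injects."""
    proto = (headers.get("CloudFront-Forwarded-Proto")
             or headers.get("X-Forwarded-Proto")
             or scheme)
    port = headers.get("X-Forwarded-Port")
    netloc = host
    # Only include the port when it is not a default HTTP(S) port.
    if port and port not in ("80", "443"):
        netloc = f"{host}:{port}"
    return f"{proto}://{netloc}{path}"

# CloudFront terminated HTTPS and forwarded the request over HTTP:
print(effective_url("http", "hyrax.example.org", "/opendap/catalog.html",
                    {"X-Forwarded-Proto": "https"}))
# prints https://hyrax.example.org/opendap/catalog.html
```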


  • Fix for bugs in the ce parser around inverted indices. 
  • Fix for libdap4 github issue 147:  Grid::get_map_iter() was off by one
  • Improvements to DAP4 api.
  • Fixed various memory leaks.
  • Replaced instances of &vector[0] with  (RHEL8)

We worked extensively with NASA on their NGAP project. We processed 87 tickets during the release period. If you have access to NASA's JIRA you can see the details here. Otherwise, you'll need to read the Hyrax-1.16.6 narrative above and you can review the GitHub releases for the associated projects for more information:

For Hyrax-1.16.5:

We worked extensively with NASA on their NGAP project. We processed 33 tickets during the release period. If you have access to NASA's JIRA you can see the details here. Otherwise, you'll need to read the Hyrax-1.16.5 narrative above and you can review the GitHub releases for the associated projects for more information:

For Hyrax-1.16.4:


For Hyrax-1.16.3:

Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing the tickets here, if you want you can see all of the tickets we processed here

We also worked extensively with NASA on their NGAP project. We processed 94 tickets during the release period. If you have access to NASA's JIRA you can see the details here

For Hyrax-1.16.2:

Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing the tickets here, if you want you can see all of the tickets we processed here

We also worked extensively with NASA on their NGAP project. We processed 71 tickets during the release period. If you have access to NASA's JIRA you can see the details here

For Hyrax-1.16.1: The following issues have been fixed:

  • HK-272 - MDS bug - LMT of data not used
  • HK-361 - More performant handling of contiguous data for the DMR++ handler
  • HK-376 - Have Travis add the source distribution tar balls to the S3 bucket.
  • HK-411 - Fix the --baselines feature of the libdap DMRTest
  • HK-413 - Persistent leaks in the libxml2-based NCML parser.
  • HK-404 - Address operational and efficiency issues in the MDS
  • HK-426 - Form interface bug - Structures do not work correctly - two issues
  • HK-439 - bes source release
  • HK-444 - Build initial version of that can read the and build a DMR.
  • HK-445 - Modify the simple code so that it includes the attributes.
  • HK-446 - Modify the code so that it correctly recognizes shared dimensions in and
  • HK-447 - Modify the code so that it can work with netCDF4 files that use groups.
  • HK-448 - Modify the code so that it can work with netCDF4 files that contain structures.
  • HK-449 - Integrate the code into the netCDF handler so that it is used for the DMR response.
  • HK-454 - the dmrpp_module is unable to build a dmr++ for the test file data/dmrpp/grid_1_2d.h5
  • HK-456 - Install the BES RPM package built from a PR and start the BES from that install. Check for failure.
  • HK-457 - The class BESRegex is utilized in a way that is incompatible with the underlying implementation. FIX
  • HK-458 - Web interface bug for Structures and Sequences
  • HK-459 - When Hyrax 1.16 runs, we see some error messages "leaking out of stderr"
  • HK-472 - BESInternalError Exception thrown by the NcML handler not handled properly
  • HK-473 - Implement combined olfs/bes log.
  • HK-474 - BES 3.20.5 memory errors
  • HK-485 - Modify the CI/CD process to make the docker image
  • HK-492 - Review the Travis activities for olfs, bes, and libdap
  • HK-537 - Reported problem in fileout_netcdf associated with _FillValue in Ocean Color dataset
  • HK-574 - Memory leak in AWSV4

For Hyrax-1.16.0: The following issues have been fixed:

  • Issues and Improvements with the CovJSON response were contributed by Corey Hemphill.
  • NetCDF file responses were not compressed when they should have been. Fix by Aron Bartle.
  • HDF5 handler: CF option: Fixed a small memory leak when handling the OCO2 Lite product. Fix by Kent Yang at The HDF Group.
  • HK-22 - The max_response_size limit is not working. Why? Fix! 
  • HK-23 - Fileout netCDF cannot generate a valid netCDF file when string datatype has a _FillValue 
  • HK-128 - FreeForm: Added regex pattern matching for format application. 
  • HK-311 - When running the httpd_catalog _tests_ I get intermittent errors on the first test.
  • HK-327 - Add response size logging to Hyrax.
  • HK-338 - In the remote THREDDS catalog presentation pages, dataset detail URL links contain spurious "/" (as "//") characters.
  • HK-351 - Gmljp2 output seems empty/broken.
  • HK-352 - fileout geotiff doesn't work for NCEP dataset.
  • HK-354 - Rewrite the Hyrax-Guide so that the OLFS configuration section reflects current situation.
  • HK-357 - Add tests for C++-11 support.
  • HK-360 - Improvement of Time Aggregation when using the DMR++ software.
  • HK-364 - Adopt New HDF5 library API for chunk info in the DMR++ handler.
  • HK-365 - Document how to serve data from S3.
  • HK-366 - Reanimate the BesCatalogCache (as BesNodeCache) but without worker threads.
  • HK-369 - Fix IFH for variable names containing things like "-" or "." which breaks the java script.
  • HK-372 - WCS fails to implement the needful atomic types. FIX.
  • HK-375 - Create SiteMap cache file to improve response site map navigation speeds for large holdings.
  • HK-387 - The httpd catalog is not showing content from the IRIS data server.
  • HK-388 - DMRs built (by libdap) fail to correctly XML encode attribute values and this breaks things.
  • HK-389 - fileout_netcdf not making compressed files when it should.
  • HK-398 - Error found by the Google JSON-LD checker in JSON-LD added for BALTO.
  • HK-403 - Memory leak in ncml_handler when accessing data from aggregated dmr++ files.
  • HK-407 - Improve the dmrpp parser.
  • HK-409 - Further GDAL tests: local netCDF tests.
  • HK-410 - D4ParserSax2 removes newline chars from element content.
  • HK-417 - Debug the httpd_catalog for IRIS on balto.o.o.
  • HK-421 - Catch up on SonarCloud issues in OLFS now that the CI scanner is working.

Hyrax Software Downloads

Hyrax is open source and is available as compiled binaries and source code. We also produce Docker images of Hyrax and its components.

Binary Packages

  Download Binaries for Enterprise Linux 8

  Download Binaries for CentOS-7

  Install Binaries

Docker Images are delayed and will be available as soon as possible. ndp-4/06/2022

Docker Images (About the Docker build process)

  Hyrax - The complete Hyrax service in a single Docker image.

  Hyrax with ncWMS2 - The Hyrax service bundled with ncWMS2 in a single Docker image.

  besd - The BES daemon in a single Docker image, typically used with Docker compose and the olfs image.

  olfs - The OLFS (and Tomcat) in a single Docker image, typically used with Docker compose and the besd image.

Source Code

Continuous Integration

Required External Dependencies

In order to run Hyrax 1.16, you will need:

  • Java 1.7 or greater
  • Tomcat 7.x or 8.x
  • Linux (We provide RPMs for RHEL-8 and CentOS-7.x; install them with dnf/yum), Ubuntu, OS-X or another suitable Unix OS.

Binaries for Hyrax 1.16.8

Software Components

To run the Hyrax server, download and install the following (from source or binary):

  • OLFS (Java 1.11+)
  • libdap
  • BES
  • ncWMS2 (optional)

Java icon  OLFS (Java-1.8)

Java icon  ncWMS2 (Java-1.8) (optional)

Linux Tux Logo BES


Beginning with Hyrax-1.16.7, our Enterprise Linux builds will be released on el8 (aka RHEL-8)

Linux (RHEL-8) x86_64 RPMs  All of the RHEL-8 RPMs we build, including the devel and debuginfo packages

  • libdap-3.20.11-0 (gpg signature) - The libdap library RPM for this release.
  • bes-3.20.13-0.static (gpg signature) - This RPM includes statically linked copies of all of the modules/handlers we support, including HDF4 & 5 with HDFEOS support. There is no need to install packages from EPEL with this RPM. Other sources of RPM packages will likely provide a bes RPM that uses handlers linked (dynamically) to dependencies from their distributions (CentOS, Fedora, etc.). Note: the bes.conf file has important changes in support of JSON-LD. Make sure to look at /etc/bes/bes.conf.rpmnew after you install/upgrade the BES with these RPMs.


CentOS-7 has reached end of life; Hyrax-1.16.7 is the last release for which we will supply CentOS-7 (aka el7) RPM binaries.

Linux (CentOS 7.x) x86_64 RPMs

All of the CentOS-7 RPMs we build, including the devel and debuginfo packages

  • libdap-3.20.11-0 (gpg signature) - The libdap library RPM for this release.
  • bes-3.20.13-0.static (gpg signature) - This RPM includes statically linked copies of all of the modules/handlers we support, including HDF4 & 5 with HDFEOS support. There is no need to install packages from EPEL with this RPM. Other sources of RPM packages will likely provide a bes RPM that uses handlers linked (dynamically) to dependencies from their distributions (CentOS, Fedora, etc.). Note: the bes.conf file has important changes in support of JSON-LD. Make sure to look at /etc/bes/bes.conf.rpmnew after you install/upgrade the BES with these RPMs.

Installing the binary distribution

BES Installation

  • Download the RPM packages (see above) for your target operating system.
  • Use yum to install the libdap and bes RPMs:
    sudo yum install libdap-3.20.*.rpm bes-3.20.*.rpm
    (Unless you're going to be building software from source for Hyrax, skip the *-devel and *-debuginfo RPMs.)
  • Look at the /etc/bes/bes.conf.rpmnew file. Localize and merge the new BES.ServerAdministrator information into your bes.conf file. Note the format of the new BES.ServerAdministrator entries as it has changed from the previous version.
  • At this point you can test the BES by typing the following into a terminal:
    • start it:
        sudo service besd start
    • connect using the simple command-line client:
        bescmdln
    • and get version information:
        show version;
    • exit from bescmdln:
        exit
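For the BES.ServerAdministrator merge described above, the new entries use a key:value form, typically one key per line. The values below are placeholders, and the exact key set should be taken from the /etc/bes/bes.conf.rpmnew file shipped with the RPM:

```
BES.ServerAdministrator=email:admin@your.domain.name
BES.ServerAdministrator+=organization:Your Organization Name
BES.ServerAdministrator+=street:Your Street Address
BES.ServerAdministrator+=city:Your City
```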

Installing the OLFS and Starting the Server

Enterprise Linux 8

In el8 the Apache Tomcat application has been removed from yum/dnf, so you will need to go to the Apache Tomcat site and retrieve the latest release.

Please note that Hyrax-1.16.8 was tested using Tomcat-9.0.64

  • Install Apache Tomcat (for this example it's in /usr/share/tomcat)
  • Make the directory /etc/olfs and ensure tomcat can write to it. (sudo mkdir /etc/olfs; chgrp tomcat /etc/olfs; chmod g+w /etc/olfs)
  • Install the opendap.war file (sudo cp opendap.war /usr/share/tomcat/webapps)
  • Start Tomcat (The bes should be running already)
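The el8 steps above, collected into one session. The Tomcat version and the /usr/share/tomcat prefix are examples from this page, not requirements, and the chgrp step assumes a tomcat group exists on the host; adjust to match your installation:

```shell
# Unpack Apache Tomcat (9.0.64 is the version Hyrax-1.16.8 was tested with)
sudo tar -xzf apache-tomcat-9.0.64.tar.gz -C /usr/share
sudo ln -s /usr/share/apache-tomcat-9.0.64 /usr/share/tomcat

# Create the OLFS configuration directory and let Tomcat write to it
sudo mkdir /etc/olfs
sudo chgrp tomcat /etc/olfs
sudo chmod g+w /etc/olfs

# Deploy the OLFS and start Tomcat (the BES should already be running)
sudo cp opendap.war /usr/share/tomcat/webapps
sudo /usr/share/tomcat/bin/startup.sh
```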


CentOS 7, modern Ubuntu/Debian systems:

Install tomcat (sudo yum install tomcat)

  • Make the directory /etc/olfs and ensure tomcat can write to it. (sudo mkdir /etc/olfs; chgrp tomcat /etc/olfs; chmod g+w /etc/olfs)
  • Unpack the opendap.war web archive file from olfs-1.18.1-webapp.tgz (tar -xzf olfs-1.18.1-webapp.tgz)
  • Install the opendap.war file (sudo cp opendap.war /usr/share/tomcat/webapps)
    NOTE: On CentOS-7 the default SELinux rules will prohibit Tomcat from reading the war file. :(
    This can be remediated by issuing the following two commands as the super user:
    • sudo semanage fcontext -a -t tomcat_var_lib_t \
    • sudo restorecon -rv /var/lib/tomcat/webapps/
  • Start tomcat: 
    • sudo service tomcat start

Test the server:

  • In a web browser, use http://localhost:8080/opendap/
  • Look at the sample data files shipped with the server
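The browser check above can also be done from the command line with curl, assuming Tomcat is listening on the default port 8080:

```shell
# Prints the HTTP status code; 200 means the OLFS responded
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/opendap/
```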


  • If you are installing the OLFS in conjunction with ncWMS2 version 2.0 or higher: Copy both the opendap.war and the ncWMS2.war files into the Tomcat webapps directory. (Re)Start Tomcat. Read about ncWMS2, and then configure ncWMS2 and the OLFS to work together.
  • From here, or if you are having problems, see our new Hyrax Manual and the older Hyrax documentation page
  • ATTENTION - If you are upgrading Hyrax from any previous installation older than 1.16.5, read this!
    The internal format of the olfs.xml file has been revised. No previous version of this file will work with Hyrax-1.16.5. In order to upgrade your system, move your old configuration directory aside (ex: mv /etc/olfs ~/olfs-OLD) and then follow the instructions to install a new OLFS. Once you have it installed and running, you will need to review your old configuration and make the appropriate changes to the new olfs.xml to restore your server's behavior. The other OLFS configuration files have not undergone any structural changes, and you may simply replace the new ones that were installed with copies of your previously working ones.
  • To make the server restart when the host boots, use systemctl enable besd and systemctl enable tomcat or chkconfig besd on and chkconfig tomcat on, depending on the specifics of your Linux distribution.

Source code for Hyrax 1.16.8

Source from GitHub

Snapshot builds

Snapshot builds from the Continuous Integration and Delivery (CI/CD) system are available in Docker images.

See our Docker Hub page for the latest "snapshot" CI/CD build of the server.