Hyrax-1.16.5 (updated 04 January 2022)
- Release Information
- Hyrax Software Downloads
- Hyrax Server Documentation
Log4j Vulnerability Statement: Hyrax does not use Log4j and does not ship with any Log4j library jar files.
- Added support for the HDF5 filter Fletcher32 to the dmr++ creation and processing code.
- Implemented lazy evaluation of dmr++ files. This change greatly improves efficiency/speed for requests that subset a dataset that contains a large number of variables as only the variables requested will have their Chunk information read and parsed.
- Added version and configuration information to dmr++ files built using the build_dmrpp and get_dmrpp applications. This will enable people to recreate and understand the conditions which resulted in a particular dmr++ instance. This also includes a -z switch for get_dmrpp which will return its version.
- Performance improvement: Chunk::add_tracking_query_param() was patched to return immediately when no parameter is submitted, eliminating a very costly regular expression evaluation that was being performed during the read operation for every Chunk. This improved read performance by two to three orders of magnitude!
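The lazy-evaluation change described above can be sketched as follows. This is an illustration only; the class and member names are hypothetical, not the actual dmr++ handler types:

```python
# Illustrative sketch of lazy chunk-information parsing: each variable
# keeps its raw chunk metadata unparsed until the variable is actually
# requested, so a subset request touching 1 of 10,000 variables pays
# the parsing cost exactly once.
class Variable:
    def __init__(self, raw_chunk_xml):
        self.raw_chunk_xml = raw_chunk_xml  # unparsed <dmrpp:chunk> text
        self._chunks = None                 # parsed lazily, on demand

    @property
    def chunks(self):
        if self._chunks is None:            # expensive parse, at most once
            self._chunks = [self.raw_chunk_xml.count("chunk")]
        return self._chunks

dataset = {f"var{i}": Variable("<dmrpp:chunk/>") for i in range(10_000)}
_ = dataset["var42"].chunks                 # a request subsets one variable
parsed = sum(1 for v in dataset.values() if v._chunks is not None)
```

The guard in the `chunks` property is the whole trick: variables that are never requested never have their chunk information read and parsed.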
BES Handler Updates
- Added a new netcdf_handler configuration parameter, NC.PromoteByteToShort, which, when set to true, causes signed 8-bit integer values to be promoted to Int16 (the DAP2 data model does not support signed 8-bit integers).
- By default the NetCDF Fileout feature will ship with FONc.ClassicModel=false
- Added a new configuration option to the hdf5_handler, EnableCFDMR=true, which allows the generation of CF-compliant DMR output.
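Collected in one place, the three handler settings above live in the corresponding BES module configuration files. This is a sketch; the values are the ones described in this release, but consult each module's shipped .conf file for the authoritative syntax:

```
# nc.conf: promote signed 8-bit integers to Int16 for DAP2 responses
NC.PromoteByteToShort=true

# fonc.conf: NetCDF Fileout now defaults to the netCDF-4 data model
FONc.ClassicModel=false

# h5.conf: enable CF-compliant DMR generation
H5.EnableCFDMR=true
```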
In order to make the server's behavior more understandable to users, and its configuration more understandable to admins, we have changed the way the server responds to client requests for the unadorned Dataset URL and the way it generates Data Request Form links in its catalog pages. There are new configuration parameters to control these behaviors.
DEPRECATED <UseDAP2ResourceUrlResponse />
This configuration parameter has been deprecated. Use the <DatasetUrlResponse /> and <DataRequestForm /> elements to configure the same behavior (see below).
<DatasetUrlResponse type="..." />
The DatasetUrlResponse element is used to configure the type of response that the server will generate when a client attempts to access the unadorned Dataset URL. The type of response is controlled by the value of the type attribute.
- dsr - The DAP4 DSR response will be returned for the dataset URL. **Note**: This setting is not compatible with a DataRequestForm type of "dap2", as the DSR response URL collides with the DAP2 Data Request Form URL.
- download - If the configuration parameter AllowDirectDataSourceAccess is set (present), then the source data file will be returned for the dataset URL. If it is not present, a 403 Forbidden response will be returned. (This is basically a file retrieval service; any constraint expression submitted with the unadorned dataset URL will be ignored.)
- requestForm - The Hyrax Data Request Form page will be returned for the dataset URL. Which form is returned is controlled by the DataRequestForm configuration element.
<DataRequestForm type="..." />
The DataRequestForm element defines the target DAP data model for the dataset links in the "blue-bar" catalog.html pages. These links point to the DAP Data Request Form for each dataset. This element also determines the type of Data Request Form page returned when the DatasetUrlResponse type="requestForm" and the request is for the Dataset URL. Allowed type values are: dap2 and dap4.
<DataRequestForm type="dap4" />
When enabled users will be able to use Hyrax as a file server and download the underlying data files/granules/objects directly, without utilizing the DAP APIs.
<!--AllowDirectDataSourceAccess / -->
The presence of this element will cause the Data Request Form interfaces to "force" the dataset URL to HTTPS. This is useful when the server sits behind a connection management tool (like AWS CloudFront) whose outward-facing connections are HTTPS while Hyrax itself is not using HTTPS; the internal URLs received by Hyrax are then on HTTP. When these URLs are exposed via the Data Request Forms, the inconsistent protocols can cause some clients to drop sessions.
<!-- ForceDataRequestFormLinkToHttps / -->
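Taken together, the elements above might appear in olfs.xml as follows. This is a sketch of one plausible configuration, not a complete file:

```xml
<!-- Bare dataset URLs return a Data Request Form... -->
<DatasetUrlResponse type="requestForm" />

<!-- ...and that form (and catalog links) target the DAP4 data model -->
<DataRequestForm type="dap4" />

<!-- Uncomment to let users download source data files directly -->
<!-- <AllowDirectDataSourceAccess /> -->

<!-- Uncomment if Hyrax sits behind an HTTPS-terminating proxy -->
<!-- <ForceDataRequestFormLinkToHttps /> -->
```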
Log Sanitization - Log entries for User-Agent, URL path, and query string are now scrubbed. Previously only the query string was scrubbed.
DAP2 Data Request Form - Dropped the link in the DAP2 Data Request Form to the no-longer-supported BES-generated request form.
OLFS Java Dependency Library Updates
Updated the dependency libraries as follows:
- Upgraded gson-2.3.1 to gson-2.8.9
- Upgraded slf4j-1.7.16 to slf4j-1.7.32
- Upgraded logback-core-1.1.11 to logback-core-1.2.9
- Upgraded logback-classic-1.2.0 to logback-classic-1.2.9
NGAP & DMR++ Improvements
- Trusted CMR
- Reworked code to use http::url instead of std::string.
- Replaced strings with http::url objects.
- Moved AllowedHosts to http.
- Fixed implementations of http::url::is_expired().
- Switch RemoteSource constructor to shared_ptr.
- Changed the way that http::url interprets protocol-less URLs.
- Fixed concurrency issues in EffectiveUrlCache.
- Corrected usage statement for get_dmrpp.
- Handle the "missing data" files in the NGAP system.
- Update NgapApiTest to reflect changes in CMR holdings.
- Dropped a useless call to Chunk.inflate() and added a state check to protect us from a CONTIGUOUS variable that is marked as compressed.
- Rewrote read_contiguous() with std::async() and std::future dropping the SuperChunk idea.
- First implementation of the new restified path with two mandatory and one optional path components.
- NGAP Updates:
- Fixed Gradle dependencies according to the Snyk scan.
- Session serialization
- Added session manager jars to ngap resources.
- Updated production rules to include session manager code in the ngap application build.
- Made the history_json output a JSONArray.
- New landing page for NGAP service.
- Added UNIX time to OLFS logs in NGAP.
- Bug Fixes:
- Fixed bug in computation of dimension sizes in the dap4 IFH xsl.
- Dropped DSR response for just the dataset_url.
- Fixed broken json error output.
- Changed the broken naming pattern for the PPT strings.
- Enabled ChunkedInputStream debugging and cleaned up the messages in support of stream pollution problem.
- Fixed broken css and deployment context links construction in DSR HTML page.
- Performance Improvements:
- Added file size and last modified headers to flat file transfer.
- Added auth.log independent of debug.log.
- First pass at patching POST request handling.
- NGAP Updates:
- DAP4 doesn't support the DAP2 Grid. The code that handles DAP2 Grid coordinates can cause some DAP4 coordinate variables under different groups to be ignored, so this fix ensures that code is not called in the DAP4 case.
- Added GitHub Actions to bes.
- Stop parser EffectiveUrl resolution activity.
- Added throttle to BESUtil::file_to_stream().
- Ensure the data value correctness for the classic model.
- When a data type mapping mismatch is encountered, an error will be generated.
- For the classic model, ensure the _FillValue datatype is the same as the variable datatype.
- Server handler refactor.
- Fixing duplicate CF history entries.
- Performed a comprehensive datatype-match check and ensured the _FillValue attribute type is the same as the variable type.
- Added new implementation of temp file transfer code for fileout_netcdf.
- Added config param Http.UserAgent.
- Fixed missing netCDF-4 and compression information when a DAP2 Grid maps to three netCDF variables.
- Added a call to the ftruncate() function in the cache-file update code, along with unit tests for string replace_all().
- Added support for streaming netCDF3 files that do not contain Structures.
- Fixed a small memory leak in the history attribute code in the transmitter.
- Added the history attribute to DAP4 responses.
- Added NC.PromoteByteToShort=true to the configuration file, making it consistent with nc.conf.in.
- Ensured that signed 8-bit integer values are correctly represented in DAP2.
- Removed the unused getAttrType function from FONcArray.cc.
- Dropped the throttle from the FONc transmitter.
- The DMR response can be directly generated rather than from DDS and DAS.
- To take advantage of this feature, the H5.EnableCFDMR key needs to be set to true in the configuration file (h5.conf).
- The biggest advantage of this implementation is that the signed 8-bit integer mapping is kept. DMR generation from the DDS and DAS maps signed 8-bit integers to 16-bit integers because of the limitations of the DAP2 data model.
- Bug fix: Ensure the path inside the coordinates attribute for the TROPOMI AI product is flattened.
- Updated the handling of escaping special characters per NASA's request. The '\' and '"' characters are no longer escaped.
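The read_contiguous() rewrite noted in the list above fans a single large read out over std::async/std::future tasks. The real code is C++; the same fan-out/join pattern is sketched here in Python with hypothetical names:

```python
from concurrent.futures import ThreadPoolExecutor

def read_range(offset, size):
    # Stand-in for one ranged HTTP GET against the data store.
    return bytes(size)

def read_contiguous(total_size, n_parts=4):
    """Split one contiguous variable into n_parts byte ranges, fetch
    them concurrently, and reassemble them in order."""
    part = total_size // n_parts
    ranges = [(i * part, part if i < n_parts - 1 else total_size - i * part)
              for i in range(n_parts)]
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        futures = [pool.submit(read_range, off, sz) for off, sz in ranges]
        # Joining futures in submission order preserves byte order.
        return b"".join(f.result() for f in futures)
```

As with std::future, collecting the results in submission order means no extra bookkeeping is needed to reassemble the variable's bytes.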
NGAP & DMR++ Improvements
- The dmr++ production chain (get_dmrpp, build_dmrpp, check_dmrpp, merge_dmrpp, and reduce_mdf) received the following updates:
- Support for injecting configuration modifications to allow fine tuning of the dataset representation in the produced dmr++ file.
- Support for HDF5 COMPACT layout data.
- Optional creation and injection of missing (domain coordinate) data as needed.
- Endian information carried in Chunks
- Int64 support
- Updated command line options and help page.
- Improved S3 reliability by adding retries for common S3 error responses that indicate a retry is worth pursuing (because S3 just fails sometimes, and a retry is suggested).
- Improved and more transparent error handling for remote access issues.
- Migrated the service implementation from making parallel requests using multi-cURL to the c++11 std:async and std:future mechanism.
- Added caching of S3 “effective” URLs obtained from NGAP service chain.
- Implemented support for EDL token chaining
- New implementation of the ngap restified path parser that is (almost) impervious to the key value content in the path.
- Implemented the SuperChunk optimization for mass acquisition of required, consecutive chunks.
- Significantly enhanced the handling of netCDF-4 like HDF5 files for the DAP4(DMR) response
- Correctly handle the pure netCDF-4 dimensions.
- Remove the pre-defined netCDF-4 attributes.
- Updated the testsuite that tests NASA DAP4 response.
- ICESat-2 ATL03 and ATL08 support:
- Add the group path and make the variable names follow the CF naming conventions for "coordinates" attributes of ATL03-like variables.
- Ensure the unsupported objects are removed in the DMR response.
- Added support for the new GPM DPR level 3 version product.
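The SuperChunk optimization mentioned above merges runs of chunks that are byte-adjacent in the object store, so many small ranged requests become a few large ones. A sketch of the coalescing step (the function and data layout are illustrative, not the actual BES code):

```python
def coalesce(chunks):
    """chunks: iterable of (offset, size) pairs. Merge runs that are
    byte-adjacent into single 'super chunk' ranges, turning many small
    S3 range-GETs into a few large ones."""
    supers = []
    for off, size in sorted(chunks):
        if supers and supers[-1][0] + supers[-1][1] == off:
            supers[-1][1] += size          # extend the current run
        else:
            supers.append([off, size])     # start a new run
    return [tuple(s) for s in supers]
```

For example, three chunks at offsets 0, 100, and 300 (sizes 100, 100, 50) coalesce into two requests: one for bytes 0-199 and one for bytes 300-349.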
Enhanced support for handling multiple HDF-EOS2 swath dimension-map pairs.
- The enhancement includes the support of multiple swaths.
- This fix solves the MOD09/MYD09 issue documented in HFRHANDLER-33.
- Added a BES key to turn off the handling of HDF-EOS2 swath dimension map.
- The latitude and longitude must be held in 2 dimensional arrays.
- The number of dimension maps must be an even number in a swath.
- The handling of MODIS level 1B remains unchanged.
- When there is one pair of dimension maps in a swath and the geo-dimensions defined in the dimension maps are used only by 2-D Latitude and Longitude fields, the old approach is used.
Variable/dimension name conventions
- The HDF-EOS2 file contains only one swath.
- The swath name is not included in the variable names.
- For latitude and longitude, the interpolated latitude and longitude variable names are named as "Latitude_1","Latitude_2","Longitude_1","Longitude_2".
- The dimension and other variable names are just modified by following the CF conventions.
- A DDS example can be found at https://github.com/OPENDAP/hdf4_handler/testsuite/h4.nasa.with_hdfeos2/MYD09.dds.bescmd.baseline
- The HDF-EOS2 file contains multiple swaths
- The swath names are included in the variable and dimension names to avoid name clashes.
- The swath names are added as suffixes to variable and dimension names.
- Examples: "temperature_swath1", "Latitude_swath1", "Latitude_swath1_1", etc.
- A DDS example can be found at https://github.com/OPENDAP/hdf4_handler/bes-testsuite/h4.with_hdfeos2/swath_3_3d_dimmap.dds.bescmd.baseline
- For applications that don't want dimension maps handled, change the BES key H4.DisableSwathDimMap in h4.conf.in from false to true.
BALTO & Dataset Search
Updated the JSON-LD content of the server's Data Request Form pages so that it is (once again) in keeping with the (evolving) rules enforced by Google's Dataset Search.
- AsciiTransmit now supports DAP4 functions
- Group support in fileout netcdf-4
- End Of Life for CentOS-6 Support - It’s been a long road CentOS-6, but NASA has given us the OK to drop support for you just days before your nominal end of life. On to the next std::future.
- Dropped the “longest matching” Whitelist configuration key in favor of a multiple regular expressions configuration using the new AllowedHosts key.
- Consolidation of internal HTTP code and caching for all services. This means more consistent behavior and error handling everywhere the server has to reach out for something.
- Introduced log message types: request, error, info, verbose, and timing, all of which are written to the file specified by BES.LogName. Each type is identified in the log and has a "fixed" format that can be reliably parsed by downstream software.
- SonarCloud and/or Snyk analysis is now a blocking step for all Hyrax component PRs.
- Our Docker images have been updated to utilize ncWMS-2.4.2 which is compatible with current Tomcat security measures. This means ncWMS2 is working again…
Dynamic Configuration - This feature is currently a proof-of-concept idea and is disabled with a compiler macro. To become an actual feature it will need to be implemented in a much more thoughtful and efficient manner, which we will be happy to do if there is sufficient interest!
Hyrax In The Cloud
Hyrax is currently deployed and running as a fault-tolerant, scalable, highly available service in the NASA NGAP 2.0 production system using AWS CloudFront scripts deployed using Bamboo. The NASA/NGAP system is a subset of Amazon's AWS cloud system and the same architecture could be deployed outside of NGAP with minimal effort. Thanks to Doug Newman and the NGAP team for this work.
- Fault tolerant: The Hyrax deployment uses multiple instances and Amazon's ALB to distribute load across multiple instances. If one instance fails, others can take on the load.
- Scalable: An auto-scaling group will adjust to load if necessary.
- Highly available: Failing instances will be detected and replaced. Deployments will be 'blue green' and thus not cause a service outage.
- Hyrax can generate signed S3 requests when processing dmr++ files whose data content live in S3 when the correct credentials are provided (injected) into the server.
- Hyrax can use Earthdata Login to authenticate users for data access.
Hyrax Regression Tests
The Hyrax regression tests have been moved out of the OLFS and into their own project. With this change comes new capabilities:
- The regression tests can be run against any Hyrax server instance with the default (packaged) data available.
- The target Hyrax instance can be running at any endpoint URL, as long as it is specified at runtime.
- The regression tests can be made to authenticate, if needed, by specifying a netrc file at runtime.
- These regression tests will soon become part of the Hyrax continuous integration process.
Kent Yang of the HDFGroup has been developing code to resolve problems encountered by dmr++ representations when the underlying data do not contain domain coordinate variables. Hyrax can synthesize these variables at runtime, and Kent has been applying these techniques to the dmr++ generation. Stay tuned for more on this front as we integrate the results into the automated dmr++ production.
Added (alpha) support for S3 authentication credentials
- Hyrax can access data in a protected Amazon Web Services S3 bucket using credentials provided by a pair of environment variables.
- For situations where Hyrax needs to use several sets of credentials with S3, it now supports storing credentials in a configuration file. Credential sets are associated with URL prefixes, making the configuration easy.
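The idea behind the multi-credential configuration is that each credential set is bound to a URL prefix, and a request whose URL matches that prefix is signed with the associated keys. The fragment below is a hypothetical illustration of that mapping only; the actual file location and key syntax are defined by the BES documentation:

```
# HYPOTHETICAL syntax -- illustrates the URL-prefix -> credentials idea
[https://s3.amazonaws.com/my-bucket/]
id  = AKIA...EXAMPLE
key = ...secret...

[https://s3.amazonaws.com/other-bucket/]
id  = AKIA...EXAMPLE2
key = ...secret2...
```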
Improved Server Logging
- The server can now be set to add the OLFS log content to the BES log file, simplifying configuration and problem diagnosis.
NASA Earthdata Login User Authentication Support
- Hyrax supports NASA's Earthdata Login system, which is based largely on the OAuth2 protocol. If you need to stage a server behind OAuth2, you may be able to use, extend, or modify the implementation in Hyrax.
Data Request Form and Catalogs
- The Data Request Form now offers a configuration parameter to control if users must choose individual variables before getting data. For some sites, it makes sense to enable users to access all the data in a dataset with one click, while for other sites this is not appropriate. Now you can configure this behavior using <RequireUserSelection /> in the olfs.xml file.
- You can now disable the automatic generation of THREDDS catalog files, and have Hyrax use catalogs you provide, instead. If <NoDynamicNavigation/> is uncommented in the olfs.xml file, then all of the dynamically generated catalog/navigation pages will be disabled. The server admin must either supply and maintain THREDDS catalogs, or provide their own system of navigation and discovery to generate links to the dataset endpoints. Note that this new option does not disable the Data Request Form.
- Reduced the time to first byte for users by eliminating the unnecessary construction of metadata objects for the data response.
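The two olfs.xml switches described above might be used like this (a sketch, not a complete olfs.xml):

```xml
<!-- Require users to select variables in the Data Request Form
     before a data download is allowed -->
<RequireUserSelection />

<!-- Uncomment to disable dynamically generated catalog/navigation
     pages and supply your own THREDDS catalogs instead -->
<!-- <NoDynamicNavigation /> -->
```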
Dataset Search Engines
Datasets served by Hyrax now provide the information Google and other search engines need to make these data findable. All dataset landing pages and catalog navigation (contents.html) pages now contain embedded JSON-LD, which crawlers such as Google Dataset Search and NSF's GeoCODES use for indexing datasets. To facilitate this, the server administrator can take certain steps to bring the Hyrax service to the attention of these crawlers. Find more about Hyrax and JSON-LD here. Our work on JSON-LD support was funded by NSF Grant #1740704.
Serving Data From S3
Hyrax 1.16 has prototype support for subset-in-place of HDF5 and NetCDF4 data files that are stored on AWS S3. See the preliminary documentation in GitHub.
The new support includes software that can configure data already stored in S3, as well as data still on spinning disk, so that it can be served (and subset) in place from S3 without reformatting the original data files. Support for other web object stores besides S3 has also been demonstrated.
This work on serving data from S3 was supported by NASA, Raytheon, and The HDF Group.
Experimental support for STARE Indexing
We have added experimental support for STARE (Spatio Temporal Adaptive-Resolution Encoding). STARE provides a way for locations on the Earth to be denoted using a single integer number instead of the conventional Latitude and Longitude notation and provides rapid intercomparisons for finding co-located data. Our work on STARE indexing was supported by NASA Grant 17-ACCESS17-0039.
We worked extensively with NASA on their NGAP project. We processed 33 tickets during the release period. If you have access to NASA's JIRA you can see the details here. Otherwise, you'll need to read the Hyrax-1.16.5 narrative above and you can review the GitHub releases for the associated projects for more information:
Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing the tickets here, if you want you can see all of the tickets we processed here.
We also worked extensively with NASA on their NGAP project. We processed 94 tickets during the release period. If you have access to NASA's JIRA you can see the details here.
Many bugs were fixed, and a lot of effort was put into continuous integration and testing. Rather than itemizing the tickets here, if you want you can see all of the tickets we processed here.
We also worked extensively with NASA on their NGAP project. We processed 71 tickets during the release period. If you have access to NASA's JIRA you can see the details here.
For Hyrax-1.16.1: The following issues have been fixed:
- HK-272 - MDS bug - LMT of data not used
- HK-361 - More performant handling of contiguous data for the DMR++ handler
- HK-376 - Have Travis add the source distribution tar balls to the S3 bucket.
- HK-411 - Fix the --baselines feature of the libdap DMRTest
- HK-413 - Persistent leaks in the libxml2-based NCML parser.
- HK-404 - Address operational and efficiency issues in the MDS
- HK-426 - Form interface bug - Structures do not work correctly - two issues
- HK-439 - bes source release
- HK-444 - Build initial version of ncdmr.cc that can read the fnoc1.nc and build a DMR.
- HK-445 - Modify the simple ncdmr.cc code so that it includes the attributes.
- HK-446 - Modify the ncdmr.cc code so that it correctly recognizes shared dimensions in fnoc1.nc and coads_climatology.nc
- HK-447 - Modify the ncdmr.cc code so that it can work with netCDF4 files that use groups.
- HK-448 - Modify the ncdmr.cc code so that it can work with netCDF4 files that contain structures.
- HK-449 - Integrate the ncdmr.cc code into the netCDF handler so that it is used for the DMR response.
- HK-454 - the dmrpp_module is unable to build a dmr++ for the test file data/dmrpp/grid_1_2d.h5
- HK-456 - Install the BES RPM package built from a PR and start the BES from that install. Check for failure.
- HK-457 - The class BESRegex is utilized in a way that is incompatible with the underlying implementation. FIX
- HK-458 - Web interface bug for Structures and Sequences
- HK-459 - When Hyrax 1.16 runs, we see some error messages "leaking out of stderr"
- HK-472 - BESInternalError Exception thrown by the NcML handler not handled properly
- HK-473 - Implement combined olfs/bes log.
- HK-474 - BES 3.20.5 memory errors
- HK-485 - Modify the CI/CD process to make the docker image
- HK-492 - Review the Travis activities for olfs, bes, and libdap
- HK-537 - Reported problem in fileout_netcdf associated with _FillValue in Ocean Color dataset
- HK-574 - Memory leak in AWSV4
For Hyrax-1.16.0: The following issues have been fixed:
- Issues and Improvements with the CovJSON response were contributed by Corey Hemphill.
- NetCDF file responses were not compressed when they should have been. Fix by Aron Bartle at mechdyne.com.
- HDF5 handler: CF option: Fixed a small memory leak when handling the OCO2 Lite product. Fix by Kent Yang at The HDF Group
- HK-22 - The max_response_size limit is not working. Why? Fix!
- HK-23 - Fileout netCDF cannot generate a valid netCDF file when string datatype has a _FillValue
- HK-128 - FreeForm: Added regex pattern matching for format application.
- HK-311 - When running the httpd_catalog _tests_ I get intermittent errors on the first test.
- HK-327 - Add response size logging to Hyrax.
- HK-338 - In the remote THREDDS catalog presentation pages, dataset detail URL links contain spurious "/" (as "//") characters.
- HK-351 - Gmljp2 output seems empty/broken.
- HK-352 - fileout geotiff doesn't work for NCEP dataset.
- HK-354 - Rewrite the Hyrax-Guide so that the OLFS configuration section reflects current situation.
- HK-357 - Add tests for C++-11 support.
- HK-360 - Improvement of Time Aggregation when using the DMR++ software.
- HK-364 - Adopt New HDF5 library API for chunk info in the DMR++ handler.
- HK-365 - Document how to serve data from S3.
- HK-366 - Reanimate the BesCatalogCache (as BesNodeCache) but without worker threads.
- HK-369 - Fix IFH for variable names containing things like "-" or "." which breaks the java script.
- HK-372 - WCS fails to implement the needful atomic types. FIX.
- HK-375 - Create SiteMap cache file to improve response site map navigation speeds for large holdings.
- HK-387 - The httpd catalog is not showing content from the IRIS data server.
- HK-388 - DMRs built (by libdap) fail to correctly XML encode attribute values and this breaks things.
- HK-389 - fileout_netcdf not making compressed files when it should.
- HK-398 - Error found by the Google JSON-LD checker in JSON-LD added for BALTO.
- HK-403 - Memory leak in ncml_handler when accessing data from aggregated dmr++ files.
- HK-407 - Improve the dmrpp parser.
- HK-409 - Further GDAL tests: local netCDF tests.
- HK-410 - D4ParserSax2 removes newline chars from element content.
- HK-417 - Debug the httpd_catalog for IRIS on balto.o.o.
- HK-421 - Catch up on SonarCloud issues in OLFS now that the CI scanner is working.
Hyrax is open source and is available as compiled binaries and source code. We also produce Docker images of Hyrax and its components.
Docker Images (About the Docker build process)
Hyrax - The complete Hyrax service in a single Docker image.
Hyrax with ncWMS2 - The Hyrax service bundled with ncWMS2 in a single Docker image.
besd - The BES daemon in a single Docker image, typically used with Docker compose and the olfs image.
olfs - The OLFS (and Tomcat) in a single Docker image, typically used with Docker compose and the besd image.
- Hyrax-1.16.5 Source Code The source code from which the release was built.
- GitHub - All of our software is available on Github
- Snapshot docker images of every successful CI outcome are pushed to docker hub.
In order to run Hyrax 1.16, you will need:
- Java 1.7 or greater
- Tomcat 7.x or 8.x
- Linux (We provide RPMs for CentOS-7.x; install them with yum), Ubuntu, OS-X or another suitable Unix OS.
Software Components for Hyrax 1.16.5
To run the Hyrax server, download and install the following (from source or binary):
- OLFS (Java 1.8+)
- ncWMS2 (optional)
- OLFS 1.18.10 Web Archive File (gpg signature) Unpack using 'tar -xvf filename' and follow the instructions in the README file. (Requires Java 1.7; built using Java 8; tested against Tomcat 8.5.34.)
- OLFS Automatic robots.txt generation for 1.18.10 (gpg_signature) This archive contains a web archive file that runs in the Tomcat server's root context that returns a response for '/robots.txt' so that your site can be crawled using the automatically-built site maps added in 1.15.2. This is beta software; we'd appreciate feedback on it.
ncWMS2 (Java-1.7) (optional)
- Use the EDAL web page to locate the latest ncWMS2 "Servlet Container" software bundle as a WAR file. Install it into the same Tomcat instance as the OLFS. The configuration instructions may be found here.
All of the CentOS-7 RPMs we build, including the devel and debuginfo packages
- libdap-3.20.9-0 (gpg signature) - The libdap library RPM for this release.
- bes-3.20.10-0.static (gpg signature) - This RPM includes statically linked copies of all of the modules/handlers we support, including HDF4 & 5 with HDFEOS support. There is no need to install packages from EPEL with this RPM. Other sources of RPM packages will likely provide a bes RPM that uses handlers linked (dynamically) to dependencies from their distributions (CentOS, Fedora, etc.). Note: the bes.conf file has important changes in support of JSON-LD. Make sure to look at /etc/bes/bes.conf.rpmnew after you install/upgrade the BES with these RPMs.
CentOS-6 has reached end of life and we are no longer supporting it.
- Legacy Linux (CentOS 6.x) x86_64 RPMs - CentOS-6 RPMs for previous version of Hyrax.
- Download the RPM packages found (see above) for your target operating system.
- Use yum to install the libdap and bes RPMs:
sudo yum install libdap-3.20.*.rpm bes-3.20.*.rpm
(Unless you're going to be building software from source for Hyrax, skip the *-devel and *-debuginfo RPMs.)
- Look at the /etc/bes/bes.conf.rpmnew file. Localize and merge the new BES.ServerAdministrator information into your bes.conf file. Note the format of the new BES.ServerAdministrator entries as it has changed from the previous version.
- At this point you can test the BES by typing the following into a terminal:
- start it:
sudo service besd start
- connect using a simple client:
bescmdln
- and get version information:
show version;
- exit from bescmdln:
exit
BES Notes - If you are upgrading from an existing installation older than 1.13.0
- In the bes.conf file the keys BES.CacheDir, BES.CacheSize, and BES.CachePrefix have been replaced with BES.UncompressCache.dir, BES.UncompressCache.size, and BES.UncompressCache.prefix respectively. Other changes include the gateway cache configuration (gateway.conf) which now uses the keys Gateway.Cache.dir, Gateway.Cache.size, and Gateway.Cache.prefix to configure its cache. Changing the names enabled the BES to use separate parameters for each of its several caches, which fixes the problem of 'cache collisions.'
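In practice, the rename amounts to replacing the old keys with the new per-cache keys. The key names below come from the text above; the directory, size, and prefix values are illustrative only:

```
# bes.conf -- replaces BES.CacheDir, BES.CacheSize, BES.CachePrefix
BES.UncompressCache.dir=/tmp/hyrax_ux
BES.UncompressCache.size=500
BES.UncompressCache.prefix=ux_

# gateway.conf -- per-cache keys prevent 'cache collisions'
Gateway.Cache.dir=/tmp/hyrax_gw
Gateway.Cache.size=500
Gateway.Cache.prefix=gw_
```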
OLFS and Starting the Server
CentOS 7, modern Ubuntu/Debian systems:
Install tomcat (sudo yum install tomcat)
- Make the directory /etc/olfs and ensure tomcat can write to it. (sudo mkdir /etc/olfs; chgrp tomcat /etc/olfs; chmod g+w /etc/olfs)
- Unpack the opendap.war web archive file from olfs-1.18.1-webapp.tgz (tar -xzf olfs-1.18.1-webapp.tgz)
- Install the opendap.war file (sudo cp opendap.war /usr/share/tomcat/webapps)
NOTE: On current CentOS-7, the default SELinux rules will prohibit Tomcat from reading the war file :(
This can be remediated by issuing the following two commands as the super user:
sudo semanage fcontext -a -t tomcat_var_lib_t /var/lib/tomcat/webapps/opendap.war
sudo restorecon -rv /var/lib/tomcat/webapps/
- Start tomcat:
sudo service tomcat start
- Test the server:
- In a web browser, use http://localhost:8080/opendap/
- Look at sample data files shipped with the server
- If you are installing the OLFS in conjunction with ncWMS2 version 2.0 or higher: Copy both the opendap.war and the ncWMS2.war files into the Tomcat webapps directory. (Re)Start Tomcat. Go read about, and then configure ncWMS2 and the OLFS to work together.
- From here, or if you are having problems, see our new Hyrax Manual and the older Hyrax documentation page
- ATTENTION - If you are upgrading Hyrax from any previous installation older than 1.16.5, read this!
The internal format of the olfs.xml file has been revised. No previous version of this file will work with Hyrax-1.16.5. In order to upgrade your system, move your old configuration directory aside (e.g., mv /etc/olfs ~/olfs-OLD) and then follow the instructions to install a new OLFS. Once you have it installed and running, review your old configuration and make the appropriate changes to the new olfs.xml to restore your server's behavior. The other OLFS configuration files have not undergone any structural changes, and you may simply replace the new ones that were installed with copies of your previously working ones.
- To make the server restart when the host boots, use systemctl enable besd and systemctl enable tomcat, or chkconfig besd on and chkconfig tomcat on, depending on the specifics of your Linux distribution.
- libdap4 3.20.9, gpg signature
- BES 3.20.10, gpg signature
- Collected dependencies for Hyrax 1.16.5 (gpg signature) - This bundles the NetCDF, HDF4, HDF5, and other libraries that the Hyrax handlers require.
- OLFS 1.18.9 (requires Java 1.7)
- All of our source code is available at our GitHub site. There you will find the hyrax project repository, which is a meta-project that contains scripts to clone and build all of Hyrax. You will also see all of the repos that contain the Hyrax source code (libdap4, the bes and all of its handlers, and the olfs).
- Directions on building Hyrax from GitHub are available at our documentation site.
Snapshot builds from the Continuous Integration and Delivery (CI/CD) system are available in Docker images.
See our Docker Hub page for the latest "snapshot" CI/CD build of the server.