The "NetCDF File Structure and Performance" chapter provides a less formal explanation of the format of netCDF data to help clarify the performance implications of different data organizations. If users only access netCDF data through the documented interfaces, future changes to the format will be transparent.
Often, building the library is as easy as running the configure script followed by make. We recently refactored the core build system of the netCDF library. Unfortunately, this hopelessly broke the existing port to Microsoft Visual Studio. Resources permitting, a new Visual Studio port will be developed at Unidata.
Until then, no Visual Studio port of the latest version of the library is available. We understand that Windows users are most comfortable with a Visual Studio build, and we intend to provide one. Unidata is a community-supported organization, and we welcome collaboration with users who would like to assist with the Windows port.
Nikolay Khabarov has contributed documentation describing a netCDF Windows port; the netCDF classic format and 64-bit offset format are fully supported, and links are provided to compiled 32-bit and 64-bit DLLs and static libraries. Another developer has contributed an unsupported native Windows build of netCDF; the announcement of the availability of that port is here. User Veit Eitner has contributed a port of an earlier version 4 release, done before the code was refactored.
User contributions of ports to F90 Windows compilers are very welcome; send them to support-netcdf@unidata.ucar.edu. The easiest course is to download one of the pre-built DLLs and utilities and just install them on your system.
These are now available from the Unidata FTP site. Otherwise, the properties of the netcdf project must be changed to include the proper header directory. Both the debug and release builds work. The release build links to different system libraries on Windows and will not allow debuggers to step into netCDF library code; this is the build most users will be interested in. The debug build is probably of interest only to netCDF library developers. For building with the Intel ifort compiler, see the ifort entry in the Other Builds document. Windows is a complicated platform to build on, and some useful explanations of its oddities can be found here. The Scientific DataSet (SDS) library gives .NET developers a way to read, write, and share scalars, vectors, and multidimensional grids using CSV, netCDF, and other file formats. It currently uses a netCDF version 4 release. In addition to .NET libraries, SDS provides a set of utilities and packages: an sds command line utility, a DataSet Viewer application, and an add-in for Microsoft Excel.
We make build output from various platforms available for comparison with your output. In general, you can ignore compiler warnings if the "make test" step is successful. The netCDF installation directory can be set at the time configure is run using the --prefix argument.
In different contexts, "netCDF" may refer to a data model, a software implementation with associated application program interfaces (APIs), or a data format. Confusion may arise in discussions of different versions of the data models, software, and formats.
For example, compatibility commitments require that new versions of the software support all previous versions of the format and data model. This section of FAQs is intended to clarify netCDF versions and help users determine what version to build and install. The classic format was originally the only format for netCDF data created by the reference software from Unidata.
This format is also referred to as the CDF-1 format. Later, the 64-bit offset format variant was added. Nearly identical to the netCDF classic format, it allows users to create and access far larger datasets than were possible with the original format.
A 64-bit platform is not required to write or read 64-bit offset netCDF files. This format is also referred to as the CDF-2 format. Subsequently, the netCDF-4 format was added to support per-variable compression, multiple unlimited dimensions, more complex data types, and better performance, by layering an enhanced netCDF access interface on top of the HDF5 format. At the same time, a fourth format variant, the netCDF-4 classic model format, was added for users who needed the performance benefits of the new format (such as compression) without the complexity of a new programming interface or enhanced data model.
Most recently, the 64-bit data format variant was added. To support large variables with more than 4 billion array elements, it replaces most of the 32-bit integers used in the format specification with 64-bit integers. It also adds support for several new data types, including unsigned byte, unsigned short, unsigned int, signed 64-bit int, and unsigned 64-bit int. A 64-bit platform is required to write or read 64-bit data netCDF files.
This format is also referred to as the CDF-5 format. With each additional format variant, the C-based reference software from Unidata has continued to support transparent access to data stored in previous formats, and to support programs written using previous programming interfaces.
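As an illustrative sketch (not text from the FAQ itself), the C library selects a format variant at creation time through mode flags passed to nc_create; the file names below are hypothetical, and the CDF-5 flag noted in the comment is only present in recent library versions.

    #include <stdio.h>
    #include <netcdf.h>

    /* Create empty files in several of the format variants by passing the
       corresponding creation-mode flags to nc_create(). */
    static void create_kind(const char *path, int mode) {
        int ncid;
        int status = nc_create(path, mode, &ncid);
        if (status != NC_NOERR)
            fprintf(stderr, "%s: %s\n", path, nc_strerror(status));
        else
            nc_close(ncid);
    }

    int main(void) {
        create_kind("classic.nc", NC_CLOBBER);                    /* classic (CDF-1) */
        create_kind("offset64.nc", NC_CLOBBER | NC_64BIT_OFFSET); /* 64-bit offset (CDF-2) */
        create_kind("netcdf4.nc", NC_CLOBBER | NC_NETCDF4);       /* netCDF-4, enhanced model */
        create_kind("netcdf4_classic.nc",
                    NC_CLOBBER | NC_NETCDF4 | NC_CLASSIC_MODEL);  /* netCDF-4 classic model */
        /* Recent library versions also accept NC_CLOBBER | NC_64BIT_DATA to
           create the 64-bit data (CDF-5) variant. */
        return 0;
    }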
We will use these shorter phrases in FAQs below when no confusion is likely. The short answer is that under most circumstances, you should not care which format variant a file uses, if you use a version 4 release of the software. HDF5 files may also begin with a user block of 512, 1024, 2048, ... bytes before the actual data, which complicates identifying the format from the first few bytes of the file. Recent versions of the ncdump utility can also report a file's format with the -k option. Finally, on a Unix system, one way to display the first four bytes of a file, say foo.nc, is with the od utility.
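For a programmatic check, here is a minimal sketch using the C API's nc_inq_format; the file name foo.nc is just an example, and the constant names assume a reasonably recent netCDF-C header.

    #include <stdio.h>
    #include <netcdf.h>

    /* Report which format variant a netCDF file uses, via nc_inq_format(). */
    int main(void) {
        int ncid, fmt, status;

        status = nc_open("foo.nc", NC_NOWRITE, &ncid);   /* hypothetical file name */
        if (status != NC_NOERR) {
            fprintf(stderr, "nc_open: %s\n", nc_strerror(status));
            return 1;
        }
        nc_inq_format(ncid, &fmt);
        switch (fmt) {
        case NC_FORMAT_CLASSIC:         puts("classic (CDF-1)");            break;
        case NC_FORMAT_64BIT_OFFSET:    puts("64-bit offset (CDF-2)");      break;  /* NC_FORMAT_64BIT in older headers */
        case NC_FORMAT_NETCDF4:         puts("netCDF-4");                   break;
        case NC_FORMAT_NETCDF4_CLASSIC: puts("netCDF-4 classic model");     break;
        default:                        puts("other/newer format variant"); break;
        }
        nc_close(ncid);
        return 0;
    }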
The enhanced model (sometimes also referred to as the netCDF-4 data model) is an extension of the classic model that adds more powerful forms of data representation and data types at the expense of some additional complexity. Although data represented with the classic model can also be represented using the enhanced model, datasets that use enhanced model features, such as user-defined data types, cannot be represented with the classic model. Use of the enhanced model requires storage in the netCDF-4 format. Software built from recent netCDF releases continues to read and write data written by earlier versions. Starting with the version 4 releases, Unidata no longer supports a separate netCDF-3-only version of the software, but instead supports both the classic and enhanced data models and all four format variants in a single source distribution.
Installing the simpler netCDF-3 version of the software is recommended if only the classic data model and classic formats are needed; installing the netCDF-4 version of the software is required in order to use the enhanced data model or the netCDF-4 formats. The enhanced model (sometimes referred to as the netCDF-4 data model) is an extension to the classic model that adds more powerful forms of data representation and data types at the expense of some additional complexity. Specifically, it adds six new primitive data types, four kinds of user-defined data types, multiple unlimited dimensions, and groups to organize data hierarchically and provide scopes for names.
A picture of the enhanced data model, with the extensions to the classic model highlighted in red, is available from the online netCDF workshop. Although data represented with the classic model can also be represented using the enhanced model, datasets that use features of the enhanced model, such as user-defined data types, cannot be represented with the classic model.
Use of added features of the enhanced model requires that data be stored in the netCDF-4 format. If you built the software from source without access to an HDF5 library, then only the netCDF-3 library was built and installed. No changes to program source are needed to read compressed data, because the library handles decompressing it as it is accessed. Recent versions of the nccopy utility can apply compression when copying a file. To do this within a program, or if you want different variables to have different levels of deflation, define compression properties when each variable is defined.
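A minimal sketch of doing this in C follows; the file path, dimension sizes, variable name, and deflate level are arbitrary illustrative choices.

    #include <netcdf.h>

    /* Define a compressed float variable in a new netCDF-4 file.
       nc_def_var_deflate() must be called while the file is in define mode,
       i.e. when the variable is first defined. */
    int define_compressed_var(const char *path) {
        int ncid, dimids[2], varid, status;

        if ((status = nc_create(path, NC_CLOBBER | NC_NETCDF4, &ncid)) != NC_NOERR)
            return status;

        nc_def_dim(ncid, "y", 1000, &dimids[0]);
        nc_def_dim(ncid, "x", 2000, &dimids[1]);
        nc_def_var(ncid, "temperature", NC_FLOAT, 2, dimids, &varid);

        /* shuffle = 1 (byte-shuffle filter), deflate = 1 (enable),
           deflate_level = 4 (0 = none ... 9 = maximum compression). */
        nc_def_var_deflate(ncid, varid, 1, 1, 4);

        nc_enddef(ncid);
        /* ... write data with nc_put_vara_float(), etc. ... */
        return nc_close(ncid);
    }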
Although default variable chunking parameters may be adequate, compression can sometimes be improved by choosing good chunking parameters when a variable is first defined. For example, if a 3D field tends to vary a lot with vertical level, but not so much within a horizontal slice corresponding to a single level, then defining chunks to be all or part of a horizontal slice would typically produce better compression than chunks that included multiple horizontal slices.
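As a sketch of that advice, assuming a hypothetical 3D variable dimensioned (level, y, x), the chunk sizes can be set to one horizontal slice per chunk when the variable is defined:

    #include <netcdf.h>

    /* Define a 3D variable whose chunks each cover one full horizontal slice
       (1 level x all y x all x), which tends to compress well when values vary
       more between levels than within a level. Sizes here are arbitrary. */
    int define_chunked_var(int ncid) {
        int dimids[3], varid;
        size_t nlev = 50, ny = 500, nx = 1000;
        size_t chunks[3] = {1, ny, nx};   /* one horizontal slice per chunk */

        nc_def_dim(ncid, "level", nlev, &dimids[0]);
        nc_def_dim(ncid, "y", ny, &dimids[1]);
        nc_def_dim(ncid, "x", nx, &dimids[2]);
        nc_def_var(ncid, "field", NC_FLOAT, 3, dimids, &varid);

        /* Chunking, like deflation, can only be set while the variable is
           being defined, before nc_enddef() is called. */
        nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks);
        nc_def_var_deflate(ncid, varid, 1, 1, 4);

        return NC_NOERR;
    }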
There are other factors in choosing chunk sizes, especially matching how the data will be accessed most frequently. Chunking properties may only be specified when a variable is first defined. An example is available demonstrating some of the new functions. R has the ncdf4 package, Python has the netcdf4-python package, and ArcGIS is among the other applications with some support for netCDF-4 data. If you want to convert a classic format file (CDF-1, 2, or 5) into a netCDF-4 format or netCDF-4 classic model format file, the easiest way is to use the nccopy utility.
For example, to convert a classic format file foo3.nc to a netCDF-4 or netCDF-4 classic model file, run nccopy and specify the desired output kind. Another method is available for relatively small files, using the ncdump and ncgen utilities built with a netCDF-4 library: assuming "small3.nc" is a relatively small classic format file, dump it to CDL text with ncdump and regenerate it in the desired format with ncgen. NetCDF-4 classic model files that use compression can be smaller than the equivalent netCDF-3 files, so downloads are quicker. If they are then unpacked and converted to the equivalent netCDF-3 files, they can be accessed by applications that haven't yet upgraded to netCDF-4. In general, you can't convert an arbitrary netCDF-4 file to a netCDF-3 file, because netCDF-4 files may have features of the netCDF enhanced data model, such as groups, compound types, variable-length types, or multiple unlimited dimensions, for which no netCDF-3 representation is available.
However, if you know that a netCDF-4 file conforms to the classic model, either because it was written as a netCDF-4 classic model file, because the program that wrote it was a netCDF-3 program that was merely relinked to a netCDF-4 library, or because no features of the enhanced model were used in writing the file, then there are several ways to convert it to a netCDF-3 file.
You can use the nccopy utility; for example, to convert a netCDF-4 classic-model format file foo4c.nc, run nccopy and specify a classic output kind. For a relatively small netCDF-4 classic model file, "small4c.nc", the ncdump and ncgen method described above also works. If you know that an HDF5 file conforms to the netCDF-4 enhanced data model, either because it was written with netCDF function calls or because it doesn't make use of HDF5 features outside that model, then it can be accessed using netCDF-4, and analyzed, visualized, and manipulated through other applications that can access netCDF-4 files.
The file extension used for netCDF files is purely a convention. The netCDF libraries don't use the file extension. A user can currently create a netCDF file with any extension, even one not consistent with the format of the file.
The ncgen utility uses the ".nc" extension by default for the binary netCDF files it generates. Recent versions of ncgen also have a "-k" option to specify what kind of output file is desired, selecting any of the four format variants using either a numeric code or a text string. Most other netCDF client software pays no attention to the file extension, so using more explicit extensions by convention has no significant drawbacks, except possibly causing confusion about format differences that may not be important. Until widely used netCDF client software has been adapted or upgraded to read netCDF-4 data, the classic file format is the default for interoperability with most existing netCDF software.
The number used for the shared library name is not related to the netCDF library version number. Earlier versions of the netCDF libraries have always been able to read data with arbitrary characters in names; the restriction has been on creating files with names that contain "invalid" special characters.
The check for characters used in names occurred when a program tried to define a new variable, dimension, or attribute, and an error would be returned if the characters in the names didn't follow the rules. However, there has never been any such check on reading data, so arbitrary characters have been permitted in names created through a different implementation of the netCDF APIs, or through very early versions of the netCDF software.
All old files are still readable and writable by the new software, and programs that used to work will still work when recompiled and relinked with the new libraries. Files using new characters in names will still be readable and writable by programs that used older versions of the libraries.
However, programs linked to older library versions will not be able to create new data objects with the new, less-restrictive names. Modifying an application to fully support the new enhanced data model may be relatively easy or arbitrarily difficult, depending on what your application does and how it is written. Use of recursion is the easiest way to handle nested groups and nested user-defined types. An object-oriented architecture is also helpful in dealing with user-defined types. We recommend proceeding incrementally, supporting features that are easier to implement first. For example, handling the six new primitive types is relatively straightforward. After that, using recursion or the group iterator interface used in nccopy to support groups is not too difficult. Providing support for user-defined types is more of a challenge, especially since they can be nested.
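To make the recursion point concrete, here is a minimal sketch (not taken from the FAQ) that walks nested groups with the netCDF-4 C API; the file name is hypothetical and error checking is mostly omitted.

    #include <stdio.h>
    #include <netcdf.h>

    /* Recursively visit a group and all of its subgroups, printing their names.
       nc_inq_grps() reports the IDs of a group's immediate children, so the
       same function can simply call itself on each child. */
    static void walk_groups(int grpid, int depth) {
        char name[NC_MAX_NAME + 1];
        int numgrps;

        nc_inq_grpname(grpid, name);
        printf("%*s%s\n", depth * 2, "", name);

        nc_inq_grps(grpid, &numgrps, NULL);        /* first call: count children */
        if (numgrps > 0) {
            int ncids[numgrps];                    /* C99 variable-length array */
            nc_inq_grps(grpid, NULL, ncids);       /* second call: get their IDs */
            for (int i = 0; i < numgrps; i++)
                walk_groups(ncids[i], depth + 1);
        }
    }

    int main(void) {
        int ncid;
        if (nc_open("example4.nc", NC_NOWRITE, &ncid) != NC_NOERR)  /* hypothetical file */
            return 1;
        walk_groups(ncid, 0);                      /* the file ID is the root group ID */
        nc_close(ncid);
        return 0;
    }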
The utility program nccopy, provided in recent releases, can serve as an example of code that handles the full enhanced data model.

If you don't set the CC environment variable, the configure script will try to find a suitable C compiler; the default choice is gcc. If you wish to use a vendor compiler, you must set CC to that compiler and set other environment variables to appropriate settings. If you don't specify a Fortran compiler (the FC and F77 environment variables), the configure script will try to find a suitable Fortran and Fortran 77 compiler.