# Eswatini CDI Automation

This repository contains the automation scripts for generating and updating drought-related data for the Eswatini Drought Map Hub. The script is designed to run periodically (ideally on a monthly basis) to ensure the data remains up-to-date.
## Table of Contents

- [Overview](#overview)
- [Prerequisites](#prerequisites)
- [Setup Instructions](#setup-instructions)
- [Running the Script](#running-the-script)
- [Environment Variables](#environment-variables)
- [Automation](#automation)
- [Contributing](#contributing)
- [License](#license)
## Overview

The Eswatini CDI Automation script (`job.sh`) is responsible for processing and updating drought-related data for the Eswatini Drought Map Hub. The Combined Drought Indicator (CDI) is an output dataset generated by executing the `cdi-scripts` developed by the National Drought Mitigation Center (NDMC).

The CDI integrates multiple drought-related indices (e.g., precipitation, soil moisture, land surface temperature, and vegetation health) to provide a comprehensive assessment of drought conditions. For more information about the CDI methodology, visit the NDMC website.

This script automates the execution of the `cdi-scripts` pipeline and ensures that the resulting CDI data is processed and uploaded to the Eswatini Drought Map Hub on a regular schedule (e.g., monthly).
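Purely as an illustration of what "combining" indicators means, the sketch below averages a few made-up, normalized indicator values into a single drought score. The real `cdi-scripts` pipeline operates on gridded rasters with NDMC's own weighting scheme; the equal weights and variable names here are assumptions for demonstration only.

```shell
# Illustrative only: blend four normalized drought indicators into one
# score with equal weights. Values and weights are invented; NDMC's
# actual CDI methodology is documented on their website.
precip=0.20; soil_moisture=0.35; lst=0.50; ndvi=0.40

cdi=$(awk -v a="$precip" -v b="$soil_moisture" -v c="$lst" -v d="$ndvi" \
  'BEGIN { printf "%.4f\n", (a + b + c + d) / 4 }')

echo "illustrative CDI score: $cdi"
```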
## Prerequisites

Before running the script, ensure the following prerequisites are met:

- **Operating System**: The script is designed to run on Linux-based systems.
- **Dependencies**:
  - Bash shell
  - Required tools and libraries installed (e.g., `curl`, `jq`)
- **Environment Configuration**: A `.env` file must be created with the necessary environment variables (see [Environment Variables](#environment-variables)).
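A quick pre-flight check can catch missing dependencies before the job fails mid-run. `check_tools` below is a hypothetical helper (not part of the repository); the tool names are the examples listed above.

```shell
# Hypothetical helper: verify that required CLI tools are on PATH
# before running job.sh. Add or remove tool names to match your setup.
check_tools() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing required tool: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Example: check the dependencies named in the list above.
check_tools curl jq || echo "install the missing tools before running job.sh" >&2
```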
## Setup Instructions

1. **Clone the repository**:

   ```bash
   git clone https://github.com/akvo/eswatini-droughtmap-hub-cdi.git
   cd eswatini-droughtmap-hub-cdi
   ```

2. **Set up environment variables**:

   - Copy the example environment file to `.env`:

     ```bash
     cp env.example .env
     ```

   - Open the `.env` file and populate it with the required values:

     ```bash
     nano .env
     ```

3. **Install dependencies**: Ensure all required tools and libraries are installed. For example:

   ```bash
   sudo apt-get update
   sudo apt-get install curl jq
   ```

## Running the Script

To execute the script manually, run the following command:

```bash
./src/background-job/job.sh
```

- Ensure the `.env` file is properly configured before running the script.
- The script should ideally be executed on a monthly basis to keep the data updated.
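If `job.sh` expects its configuration as exported environment variables (an assumption; check the script itself), a small sketch like the one below loads `.env` before launching it. `load_env` is a hypothetical helper, not part of the repository.

```shell
# Minimal sketch, assuming job.sh reads exported environment variables.
load_env() {
  # Source the given env file, auto-exporting every assignment it makes.
  local file="${1:-.env}"
  [ -f "$file" ] || { echo "missing env file: $file" >&2; return 1; }
  set -a
  . "$file"
  set +a
}

# Typical manual run from the repository root:
# load_env .env && ./src/background-job/job.sh
```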
## Environment Variables

The script relies on the following environment variables, which must be defined in the `.env` file. These variables configure the data sources, authentication, and target systems for the automation process.
| Variable Name | Description | Example Value |
|---|---|---|
| `DOWNLOAD_CHIRPS_BASE_URL` | Base URL for downloading CHIRPS (Climate Hazards Group InfraRed Precipitation with Station) data. | `https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_monthly/tifs/` |
| `DOWNLOAD_CHIRPS_PATTERN` | File pattern to match CHIRPS data files. | `.tif.gz` |
| `DOWNLOAD_SM_BASE_URL` | Base URL for downloading Soil Moisture (SM) data from NASA's FLDAS dataset. | `https://hydro1.gesdisc.eosdis.nasa.gov/data/FLDAS/FLDAS_NOAH01_C_GL_M.001/` |
| `DOWNLOAD_SM_PATTERN` | File pattern to match Soil Moisture data files. | `FLDAS.*\.nc` |
| `DOWNLOAD_LST_BASE_URL` | Base URL for downloading Land Surface Temperature (LST) data from MODIS. | `https://e4ftl01.cr.usgs.gov/MOLT/MOD21C3.061/` |
| `DOWNLOAD_LST_PATTERN` | File pattern to match LST data files. | `.hdf` |
| `DOWNLOAD_NDVI_BASE_URL` | Base URL for downloading Normalized Difference Vegetation Index (NDVI) data from MODIS. | `https://e4ftl01.cr.usgs.gov/MOLT/MOD13C2.061/` |
| `DOWNLOAD_NDVI_PATTERN` | File pattern to match NDVI data files. | `.hdf` |
| `EARTHDATA_USERNAME` | Username for authenticating with NASA Earthdata services (required for downloading datasets). | `yourusername` |
| `EARTHDATA_PASSWORD` | Password for authenticating with NASA Earthdata services. | `yourpassword` |
| `GEONODE_URL` | Base URL of the GeoNode instance where processed data will be uploaded. | `https://yourgeonodeinstance.com` |
| `GEONODE_USERNAME` | Username or email for authenticating with the GeoNode instance. | `yourgeonodeusernameoremail` |
| `GEONODE_PASSWORD` | Password for authenticating with the GeoNode instance. | `yourgeonodepassword` |
- Ensure that all URLs are correct and accessible from your system.
- Replace placeholder values (e.g., `yourusername`, `yourpassword`) with actual credentials.
- The file patterns (e.g., `.tif.gz`, `.hdf`) are used to identify specific files during the download process. Modify them only if the file naming conventions change.
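Put together, a populated `.env` might look like the sketch below (the values are the placeholders from the table, not working credentials; the regex-style patterns are quoted so the shell does not mangle the backslash):

```shell
# Example .env — placeholder values copied from the table above.
# Replace every credential before use.
DOWNLOAD_CHIRPS_BASE_URL=https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_monthly/tifs/
DOWNLOAD_CHIRPS_PATTERN=.tif.gz
DOWNLOAD_SM_BASE_URL=https://hydro1.gesdisc.eosdis.nasa.gov/data/FLDAS/FLDAS_NOAH01_C_GL_M.001/
DOWNLOAD_SM_PATTERN='FLDAS.*\.nc'
DOWNLOAD_LST_BASE_URL=https://e4ftl01.cr.usgs.gov/MOLT/MOD21C3.061/
DOWNLOAD_LST_PATTERN=.hdf
DOWNLOAD_NDVI_BASE_URL=https://e4ftl01.cr.usgs.gov/MOLT/MOD13C2.061/
DOWNLOAD_NDVI_PATTERN=.hdf
EARTHDATA_USERNAME=yourusername
EARTHDATA_PASSWORD=yourpassword
GEONODE_URL=https://yourgeonodeinstance.com
GEONODE_USERNAME=yourgeonodeusernameoremail
GEONODE_PASSWORD=yourgeonodepassword
```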
## Automation

To automate the execution of the script, you can use a cron job. Follow these steps:

1. Open the crontab editor:

   ```bash
   crontab -e
   ```

2. Add the following line to schedule the script to run monthly:

   ```bash
   0 0 1 * * /path/to/repository/src/background-job/job.sh >> /path/to/logfile.log 2>&1
   ```

   - This example runs the script at midnight on the first day of every month.
   - Replace `/path/to/repository` with the actual path to your repository.
   - Logs will be appended to `/path/to/logfile.log`.

3. Save and exit the crontab editor.
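If a monthly run can occasionally take long enough to overlap with the next one, a lock around the job prevents two instances from downloading and uploading at the same time. The `run_once` wrapper below is a hypothetical sketch (a portable `mkdir`-based lock; `flock` would also work on Linux); the paths are placeholders.

```shell
# Hypothetical cron wrapper: skip this run if a previous one is still
# holding the lock. Wrap the job.sh invocation from the crontab line
# in a small script that defines and calls run_once.
run_once() {
  local lockdir="${LOCKDIR:-/tmp/eswatini-cdi.lock}"
  if ! mkdir "$lockdir" 2>/dev/null; then
    echo "previous run still in progress, skipping" >&2
    return 1
  fi
  "$@"                 # e.g. /path/to/repository/src/background-job/job.sh
  local status=$?
  rmdir "$lockdir"     # release the lock even if the job failed
  return "$status"
}

# Example (paths assumed): run_once /path/to/repository/src/background-job/job.sh
```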
## Contributing

We welcome contributions to improve this project! To contribute:

1. Fork the repository.
2. Create a new branch for your changes:

   ```bash
   git checkout -b feature/your-feature-name
   ```

3. Commit your changes and push them to your fork:

   ```bash
   git commit -m "Add your descriptive commit message"
   git push origin feature/your-feature-name
   ```

4. Submit a pull request to the `main` branch of this repository.
## License

This project is licensed under the MIT License. See the LICENSE file for details.

If you have any questions or need further assistance, feel free to open an issue in this repository.