Eswatini Drought Map Hub - CDI Automation

This repository contains the automation scripts for generating and updating drought-related data for the Eswatini Drought Map Hub. The script is designed to run periodically (ideally on a monthly basis) to ensure the data remains up-to-date.

Table of Contents

  1. Overview
  2. Prerequisites
  3. Setup Instructions
  4. Running the Script
  5. Environment Variables
  6. Automation
  7. Contributing
  8. License

Overview

The Eswatini CDI Automation script (job.sh) is responsible for processing and updating drought-related data for the Eswatini Drought Map Hub. The Combined Drought Indicator (CDI) is an output dataset generated by executing the cdi-scripts developed by the National Drought Mitigation Center (NDMC).

The CDI integrates multiple drought-related indices (e.g., precipitation, soil moisture, land surface temperature, and vegetation health) to provide a comprehensive assessment of drought conditions. For more information about the CDI methodology, visit the NDMC website.

This script automates the execution of the cdi-scripts pipeline and ensures that the resulting CDI data is processed and uploaded to the Eswatini Drought Map Hub on a regular schedule (e.g., monthly).
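
At a high level, a monthly run can be thought of as three stages. The outline below is a conceptual sketch only; the stage descriptions summarize the paragraph above and the actual logic lives in src/background-job/job.sh:

#!/usr/bin/env bash
# Conceptual outline of a monthly run; not the real job.sh.
set -euo pipefail

# Load data-source URLs, file patterns, and credentials (see Environment Variables).
set -a; source .env; set +a

echo "Stage 1: download the monthly inputs (CHIRPS precipitation, FLDAS soil moisture, MODIS LST, MODIS NDVI)"
echo "Stage 2: run the NDMC cdi-scripts pipeline to combine the indices into the CDI"
echo "Stage 3: upload the resulting CDI layer to ${GEONODE_URL}"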


Prerequisites

Before running the script, ensure the following prerequisites are met:

  1. Operating System: The script is designed to run on Linux-based systems.
  2. Dependencies:
  • Bash shell
  • Required tools and libraries (e.g., curl and jq)
  3. Environment Configuration: A .env file must be created with the necessary environment variables (see Environment Variables).

Setup Instructions

  1. Clone the Repository:
git clone https://github.com/akvo/eswatini-droughtmap-hub-cdi.git
cd eswatini-droughtmap-hub-cdi
  2. Set Up Environment Variables:
  • Copy the example environment file to .env:
    cp env.example .env
  • Open the .env file and populate it with the required values:
    nano .env
  3. Install Dependencies: Ensure all required tools and libraries are installed (a quick pre-flight check is sketched after this step). For example:
sudo apt-get update
sudo apt-get install curl jq
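
To confirm the tools are available before the first run, a minimal pre-flight check might look like the following. The list of tools is an assumption based on the examples above; extend it if job.sh needs more:

for tool in bash curl jq; do
  command -v "$tool" >/dev/null 2>&1 || { echo "Missing required tool: $tool" >&2; exit 1; }
done
echo "All required tools are available."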

Running the Script

To execute the script manually, run the following command:

./src/background-job/job.sh

Notes:

  • Ensure the .env file is properly configured before running the script.
  • The script should ideally be executed on a monthly basis to keep the data updated.
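
If job.sh does not read the .env file on its own (an assumption; check the script before relying on this), one way to run it manually is to export the variables first, from the repository root:

set -a          # export every variable defined while sourcing
source .env
set +a
./src/background-job/job.sh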

Environment Variables

The script relies on the following environment variables, which must be defined in the .env file. These variables configure the data sources, authentication, and target systems for the automation process.

| Variable Name | Description | Example Value |
| --- | --- | --- |
| DOWNLOAD_CHIRPS_BASE_URL | Base URL for downloading CHIRPS (Climate Hazards Group InfraRed Precipitation with Station) data. | https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_monthly/tifs/ |
| DOWNLOAD_CHIRPS_PATTERN | File pattern to match CHIRPS data files. | .tif.gz |
| DOWNLOAD_SM_BASE_URL | Base URL for downloading Soil Moisture (SM) data from NASA's FLDAS dataset. | https://hydro1.gesdisc.eosdis.nasa.gov/data/FLDAS/FLDAS_NOAH01_C_GL_M.001/ |
| DOWNLOAD_SM_PATTERN | File pattern to match Soil Moisture data files. | FLDAS.*\.nc |
| DOWNLOAD_LST_BASE_URL | Base URL for downloading Land Surface Temperature (LST) data from MODIS. | https://e4ftl01.cr.usgs.gov/MOLT/MOD21C3.061/ |
| DOWNLOAD_LST_PATTERN | File pattern to match LST data files. | .hdf |
| DOWNLOAD_NDVI_BASE_URL | Base URL for downloading Normalized Difference Vegetation Index (NDVI) data from MODIS. | https://e4ftl01.cr.usgs.gov/MOLT/MOD13C2.061/ |
| DOWNLOAD_NDVI_PATTERN | File pattern to match NDVI data files. | .hdf |
| EARTHDATA_USERNAME | Username for authenticating with NASA Earthdata services (required for downloading the datasets above). | yourusername |
| EARTHDATA_PASSWORD | Password for authenticating with NASA Earthdata services. | yourpassword |
| GEONODE_URL | Base URL of the GeoNode instance where processed data will be uploaded. | https://yourgeonodeinstance.com |
| GEONODE_USERNAME | Username or email for authenticating with the GeoNode instance. | yourgeonodeusernameoremail |
| GEONODE_PASSWORD | Password for authenticating with the GeoNode instance. | yourgeonodepassword |
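
For reference, a populated .env might look like the sketch below. The values mirror the examples in the table; every URL and credential is a placeholder to replace with your own, and the plain KEY=value format is assumed to match env.example:

DOWNLOAD_CHIRPS_BASE_URL=https://data.chc.ucsb.edu/products/CHIRPS-2.0/global_monthly/tifs/
DOWNLOAD_CHIRPS_PATTERN=.tif.gz
DOWNLOAD_SM_BASE_URL=https://hydro1.gesdisc.eosdis.nasa.gov/data/FLDAS/FLDAS_NOAH01_C_GL_M.001/
DOWNLOAD_SM_PATTERN=FLDAS.*\.nc
DOWNLOAD_LST_BASE_URL=https://e4ftl01.cr.usgs.gov/MOLT/MOD21C3.061/
DOWNLOAD_LST_PATTERN=.hdf
DOWNLOAD_NDVI_BASE_URL=https://e4ftl01.cr.usgs.gov/MOLT/MOD13C2.061/
DOWNLOAD_NDVI_PATTERN=.hdf
EARTHDATA_USERNAME=yourusername
EARTHDATA_PASSWORD=yourpassword
GEONODE_URL=https://yourgeonodeinstance.com
GEONODE_USERNAME=yourgeonodeusernameoremail
GEONODE_PASSWORD=yourgeonodepassword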

Notes:

  • Ensure that all URLs are correct and accessible from your system.
  • Replace placeholder values (e.g., yourusername, yourpassword) with actual credentials.
  • The file patterns (e.g., .tif.gz, .hdf) are used to identify specific files during the download process. Modify them only if the file naming conventions change.
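
If you ever need to fetch one of the NASA-hosted files by hand (for example, to debug a failed download), the GES DISC and LP DAAC endpoints above authenticate through NASA Earthdata Login (URS). A common approach is a ~/.netrc entry plus curl's netrc and cookie options. This is only a debugging aid, independent of however job.sh itself authenticates, and the granule name at the end is a hypothetical placeholder:

# Assumes EARTHDATA_USERNAME, EARTHDATA_PASSWORD, and DOWNLOAD_NDVI_BASE_URL are already
# loaded from .env. Note: this overwrites any existing ~/.netrc.
printf 'machine urs.earthdata.nasa.gov login %s password %s\n' "$EARTHDATA_USERNAME" "$EARTHDATA_PASSWORD" > ~/.netrc
chmod 600 ~/.netrc
curl -n -L -c cookies.txt -b cookies.txt -O "${DOWNLOAD_NDVI_BASE_URL}EXAMPLE_GRANULE.hdf"   # replace with a real date folder and granule name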

Automation

To automate the execution of the script, you can use a cron job. Follow these steps:

  1. Open the crontab editor:
crontab -e
  2. Add the following line to schedule the script to run monthly:
0 0 1 * * /path/to/repository/src/background-job/job.sh >> /path/to/logfile.log 2>&1
  • This example runs the script at midnight on the first day of every month.
  • Replace /path/to/repository with the actual path to your repository.
  • Logs will be appended to /path/to/logfile.log.
  3. Save and exit the crontab editor. (A wrapper-script variant that loads .env before the job runs is sketched below.)
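
Cron jobs run with a minimal environment, so it can help to call the script through a small wrapper that changes into the repository and loads .env first. This is a sketch under the assumption that job.sh relies on those variables being exported; the wrapper name and paths are placeholders:

#!/usr/bin/env bash
# Hypothetical wrapper, e.g. /path/to/repository/run-cdi-job.sh
set -euo pipefail
cd /path/to/repository            # replace with the actual repository path
set -a; source .env; set +a       # export the variables defined in .env
./src/background-job/job.sh

The crontab entry would then point at the wrapper instead of job.sh:

0 0 1 * * /path/to/repository/run-cdi-job.sh >> /path/to/logfile.log 2>&1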

Contributing

We welcome contributions to improve this project! To contribute:

  1. Fork the repository.
  2. Create a new branch for your changes:
git checkout -b feature/your-feature-name
  3. Commit your changes and push them to your fork:
git commit -m "Add your descriptive commit message"
git push origin feature/your-feature-name
  4. Submit a pull request to the main branch of this repository.

License

This project is licensed under the MIT License. See the LICENSE file for details.


If you have any questions or need further assistance, feel free to open an issue in this repository.
