Welcome to IMDAA Studio

Your guide to converting IMDAA reanalysis archives into research-ready CSV datasets.

What Can You Do?

Upload & Convert
Upload raw IMDAA ZIP archives and convert them into structured CSV files with live progress tracking.
Apply Corrections
Run scientific correction engines on your CSV data — temperature, radiation, humidity, wind speed, and more.
Convert Units
Transform units across your datasets — Kelvin to Celsius, W/m² to MJ/m², and other scientific conversions.
Go Offline
Install the PWA app and process unlimited files locally without any upload size limits.

Online vs Offline Mode

| Feature | Online (Cloud) | Offline (Local) |
| --- | --- | --- |
| Upload Limit | 3 GB per file | Unlimited |
| Processing | Cloud server | Your machine |
| Requires Login | Yes | Optional |
| Requires Install | No | Yes (PWA + Node.js) |
| Internet Required | Yes | No |

Uploading Archives

How to upload IMDAA reanalysis ZIP files for conversion.

Step-by-Step

1. Go to the Dashboard: Sign in to your account and navigate to the Home page. You'll see the upload area at the center of the screen.
2. Select Your Archive: Drag and drop your IMDAA .zip archive onto the upload zone, or click to browse. The file must contain NetCDF (.nc) files from IMDAA reanalysis datasets.
3. Configure & Submit: After the archive is inspected, choose your output format and variable selections, then click Submit to begin processing.
4. Download Results: Track progress on the My Requests page. Once complete, download your CSV files directly from the dashboard.
💡 You can re-use previously uploaded archives. Go to the upload step and select from your existing archive list instead of uploading again.

Tracking Requests

Monitor your conversion jobs in real time.

Request Statuses

| Status | Meaning |
| --- | --- |
| Queued | Your request is waiting to be processed. |
| Processing | The server is actively converting your archive. You'll see a live progress bar. |
| Completed | All CSV files are ready for download. |
| Failed | Something went wrong. Check the error message and try again. |

What You Can Do

  • View Progress — Live percentage and ETA updates as your job runs.
  • Download Files — Once complete, download individual CSVs or all at once.
  • Delete Requests — Remove completed or failed requests from your history.
  • Re-run — Use the same archive to create new CSV variants with different settings.

Offline Mode

Process unlimited files locally on your own computer — no internet required.

📱 Offline mode is only available when you install IMDAA Studio as a PWA app. Visit the site in Chrome or Edge, then click "Install" in the browser's address bar.

How It Works

When you switch to Offline mode in the dashboard, IMDAA Studio connects to a local server running on your machine instead of the cloud. This removes the 3 GB upload limit and processes everything locally.

Setting Up Offline Mode

1. Install the PWA App: Open IMDAA Studio in Chrome or Edge. Click the install icon in the address bar (or the "Install App" prompt). This adds the app to your desktop.
2. Install Node.js: Download and install Node.js (v18 or later) from nodejs.org.
3. Install the IMDAA Studio Package: Open a terminal and run:

```
$ npm i -g imdaa-studio
```

4. Start the Local Server: Run:

```
$ imdaa-studio start
```

This starts the local server. Open the IMDAA Studio PWA app and switch to Offline mode using the toggle in the header.

⚠️ Keep the terminal window open while using offline mode. Closing it will stop the local server. To restart, run imdaa-studio start again.

Correction Engines

Apply scientific correction pipelines to your processed CSV data.

What Are Correction Engines?

IMDAA provides raw 3-hourly reanalysis data. Correction engines transform this raw data into daily, scientifically standardized products that are ready for research — converting units, aggregating timesteps, and applying domain-specific formulas.

How to Apply Corrections

1. Navigate to Corrections: Go to the Corrections page from the sidebar menu.
2. Select a Source File: Choose a processed CSV from your completed requests. The system will show which correction modules are compatible with your data.
3. Choose a Module: Select the correction engine to apply. Each module shows what it does and what outputs you'll receive.
4. Review & Download: Once processing completes, preview the results and download the corrected CSV files.

Available Modules

| Module | What It Does | Output |
| --- | --- | --- |
| 🌡️ Temperature | Converts 3-hourly Kelvin data to daily °C products | Tmean, Tmax, Tmin |
| ☀️ Radiation | Integrates W/m² flux into daily energy totals | SW, LW, Net Radiation (MJ/m²/day) |
| 💧 Relative Humidity | Converts fractional RH to daily percentage stats | RH mean, max, min (%) |
| ☁️ Vapor Pressure | Derives actual vapor pressure from dewpoint | ea (kPa) |
| 🌬️ Wind Speed | Computes magnitude from U/V components at 2 m height | Daily mean wind speed (m/s) |
| 🌧️ Rainfall Erosivity | Estimates R-factor from sub-daily rainfall | Erosivity index (MJ·mm/ha/hr) |

Unit Conversion

Transform units across your CSV datasets with built-in scientific formulas.

How to Use

1. Open the Unit Converter: Navigate to the Unit Conversion page from the sidebar. You'll see the converter interface.
2. Upload Your CSV: Upload a processed CSV file. The system reads the columns and lets you select which ones to convert.
3. Choose Conversions: Select the source unit and target unit for each column. The formula used is shown transparently so you can verify the transformation.
4. Download Converted CSV: Click Download to get your unit-converted CSV with standardized column names and clear unit labels.

Common Conversions

| From | To | Formula |
| --- | --- | --- |
| Kelvin (K) | Celsius (°C) | T(°C) = T(K) − 273.15 |
| W/m² | MJ/m²/day | E = flux × 0.0864 |
| Pa | kPa | P(kPa) = P(Pa) / 1000 |
| m/s | km/h | v(km/h) = v(m/s) × 3.6 |
| kg/m²/s | mm/day | P(mm) = P(kg/m²/s) × 86400 |
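As a quick sketch, the formulas in this table translate directly into Python; the helper names below are illustrative, not part of IMDAA Studio:

```python
# Illustrative helpers for the conversions above (names are hypothetical).

def kelvin_to_celsius(t_k):
    return t_k - 273.15

def flux_to_mj_per_day(mean_flux_wm2):
    # Daily mean flux (W/m²) → daily energy (MJ/m²/day): 86400 s / 10⁶
    return mean_flux_wm2 * 0.0864

def pa_to_kpa(p_pa):
    return p_pa / 1000

def ms_to_kmh(v_ms):
    return v_ms * 3.6

def precip_rate_to_mm_per_day(p_kgm2s):
    # 1 kg/m² of water is a 1 mm layer, so kg/m²/s × 86400 s = mm/day
    return p_kgm2s * 86400
```

Note that the W/m² conversion assumes the input is a daily mean flux; instantaneous 3-hourly values are integrated differently (see the Radiation module).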

File Management

Organize, download, and manage your processed archives and CSV outputs.

Your Workspace

The Files page shows all your archives and their associated outputs organized by upload. Each archive can have multiple CSV variants from different processing runs.

What You Can Do

  • Download — Download individual CSV files or all outputs from an archive at once.
  • Preview — Quick-view the first rows of any CSV to verify the data before downloading.
  • Re-process — Select an existing archive and run it again with different settings.
  • Delete — Remove files you no longer need to free up storage.
💡 Your storage quota depends on your membership plan. Check your Account page to see how much storage you have remaining.

Scientific Modules Reference

How each correction module transforms your IMDAA data.

🌡️ Temperature

Converts 3-hourly temperatures from Kelvin to Celsius, then produces daily aggregates.

UNIT CONVERSION
T(°C) = T(K) − 273.15

Products: Tmean (daily mean), Tmax (daily maximum), Tmin (daily minimum).
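A minimal pandas sketch of this pipeline, assuming a made-up column name TMP_K rather than IMDAA's actual headers:

```python
import pandas as pd

# Hypothetical 3-hourly record for one grid point and one day (8 timesteps)
df = pd.DataFrame({
    "lat": [17.5] * 8,
    "lon": [78.5] * 8,
    "time": pd.date_range("2020-06-01", periods=8, freq="3h"),
    "TMP_K": [300.0, 299.5, 299.0, 301.0, 303.5, 304.0, 302.0, 300.5],
})

df["TMP_C"] = df["TMP_K"] - 273.15     # Kelvin → Celsius
df["date"] = df["time"].dt.date

# Daily aggregates per grid point
daily = (df.groupby(["lat", "lon", "date"])["TMP_C"]
           .agg(Tmean="mean", Tmax="max", Tmin="min")
           .reset_index())
```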

☀️ Radiation

Integrates instantaneous flux values (W/m²) into daily energy totals (MJ/m²/day).

FLUX INTEGRATION
R_daily = Σ(i=1 to 8) [ R_i × 0.0108 ]
where 0.0108 = 10800 seconds / 10⁶ (J → MJ)

Products: Net Shortwave, Net Longwave, Net Radiation.
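The integration can be sketched in a few lines; the flux values are invented, and the constant follows from the 10800-second timestep:

```python
import pandas as pd

# Hypothetical 3-hourly shortwave flux (W/m²) for a single day
flux = pd.Series([0.0, 0.0, 120.0, 450.0, 620.0, 380.0, 90.0, 0.0])

SECONDS_PER_STEP = 10800   # 3 hours
J_TO_MJ = 1e-6

# Each instantaneous flux value stands in for one 3-hour interval
daily_mj = (flux * SECONDS_PER_STEP * J_TO_MJ).sum()   # MJ/m²/day
```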

💧 Relative Humidity

Converts fractional RH (0–1) to percentage and computes daily statistics.

CONVERSION
RH(%) = RH(fraction) × 100

Products: RH mean, RH max, RH min.

☁️ Vapor Pressure

Derives actual vapor pressure from dewpoint temperature using the Magnus formula.

MAGNUS FORMULA
ea = 0.6108 × exp[ (17.27 × Td) / (Td + 237.3) ]

Products: Daily mean actual vapor pressure (kPa).
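The formula transcribes directly; this sketch assumes Td is in °C and returns kPa:

```python
import math

def actual_vapor_pressure(td_c):
    """Magnus formula: dewpoint temperature (°C) → actual vapor pressure ea (kPa)."""
    return 0.6108 * math.exp(17.27 * td_c / (td_c + 237.3))
```

At a dewpoint of 0 °C this returns 0.6108 kPa, the saturation vapor pressure at the freezing point.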

🌬️ Wind Speed

Computes wind speed from U and V vector components with FAO-56 height adjustment.

VECTOR MAGNITUDE + HEIGHT ADJ.
WS = √(U² + V²)
WS_2m = WS_z × [ 4.87 / ln(67.8z − 5.42) ]

Products: Daily mean wind speed at 2m (m/s).
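Both steps fit in one small function; z is the measurement height in metres, and the 10 m default is an assumption about the source data:

```python
import math

def wind_speed_2m(u, v, z=10.0):
    """U/V components (m/s) at height z (m) → wind speed at 2 m (FAO-56 profile)."""
    ws_z = math.hypot(u, v)                        # vector magnitude √(U² + V²)
    return ws_z * 4.87 / math.log(67.8 * z - 5.42)
```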

🌧️ Rainfall Erosivity

Estimates the R-factor (erosive potential) from sub-daily rainfall intensity.

R-FACTOR
R = Σ(EI₃₀) where EI₃₀ = kinetic energy × max 30-min intensity

Products: Daily precipitation (mm), Erosivity index.

🌱 Soil Moisture

Processes 4-layer Noah LSM soil moisture data (m³/m³) to compute layer-wise daily means, total soil water storage, and root zone averages.

SOIL WATER STORAGE
SWS = Σ(θᵢ × Zᵢ) → mm of water
Layers: L1 (0–10cm, 100mm) · L2 (10–35cm, 250mm) · L3 (35–100cm, 650mm) · L4 (100–300cm, 2000mm)
ROOT ZONE MOISTURE
θ_root = (θ₁×Z₁ + θ₂×Z₂ + θ₃×Z₃) / (Z₁ + Z₂ + Z₃) → m³/m³

Products: Layer-wise daily mean, Total SWS (mm), Root Zone moisture (0–1 m), Surface layer (0–10 cm), Monthly summaries.
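A sketch of both formulas, with the layer thicknesses above baked in; the θ values used in testing are illustrative:

```python
# Layer thicknesses in mm, matching the schema above
LAYER_MM = [100, 250, 650, 2000]   # L1 0–10cm, L2 10–35cm, L3 35–100cm, L4 100–300cm

def soil_water_storage(theta):
    """theta: volumetric moisture (m³/m³) for the four layers → total water (mm)."""
    return sum(t * z for t, z in zip(theta, LAYER_MM))

def root_zone_moisture(theta):
    """Thickness-weighted mean over the top three layers (0–1 m), in m³/m³."""
    top = LAYER_MM[:3]
    return sum(t * z for t, z in zip(theta, top)) / sum(top)
```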

Build Custom Correction Engines

Create your own scientific correction modules and share them with the research community.

🔧 Custom engines let you define your own scientific pipeline — input variables, formulas, output schema, and documentation — all uploadable from the Corrections page in the dashboard.

What You Need

Every custom correction module requires two files:

| File | Format | Purpose |
| --- | --- | --- |
| logic.py | Python (.py) | The processing script with your scientific logic |
| theory.md | Markdown (.md) | Scientific documentation shown to users in the wizard |

Input File Requirements

Your module will receive a processed IMDAA CSV file. These CSVs always contain:

| Column | Type | Description |
| --- | --- | --- |
| lat | Float | Latitude coordinate |
| lon | Float | Longitude coordinate |
| time | Datetime | Timestamp of the observation |
| Variable columns | Float | The actual data values (e.g., temperature, wind speed) |
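As an illustrative (not required) safeguard, a module can verify these guaranteed columns before computing:

```python
import pandas as pd

def check_required_columns(df):
    """Raise early if the guaranteed metadata columns are absent."""
    required = {"lat", "lon", "time"}
    missing = required - {c.lower() for c in df.columns}
    if missing:
        raise ValueError(f"Input CSV is missing columns: {sorted(missing)}")

# Example with a minimal valid frame
df = pd.DataFrame({"lat": [17.5], "lon": [78.5],
                   "time": ["2020-06-01 00:00"], "TMP_K": [300.0]})
check_required_columns(df)   # passes silently
```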

Python Script Structure

Your logic.py must define two things:

1. Metadata Constants

These tell the UI what your module needs and what it produces:

```python
# Variables the user must map from their CSV
REQUIRED_VARIABLES = ["Temperature", "Dewpoint"]

# Output schema — what the module produces
AGGREGATION_SCHEMA = {
    "baseline": {
        "options": [
            {"id": "original", "label": "Original Values", "default": True}
        ]
    },
    "correctionOutputs": {
        "aggregations": [
            {
                "temporal": "daily",
                "label": "Daily",
                "types": [
                    {"id": "mean", "label": "Daily Mean", "default": True},
                    {"id": "max", "label": "Daily Max", "default": False}
                ]
            }
        ]
    }
}
```

2. The apply_correction Function

This is the entry point the engine calls with the user's mapped data:

```python
# This function is called by the engine
def apply_correction(df, frequency="hourly", aggregations=None, **kwargs):
    """
    df: Pandas DataFrame with metadata + mapped columns
    frequency: Data frequency ("hourly", "3hourly", etc.)
    aggregations: List of user-selected outputs
    """
    # 1. Get variable columns (everything except metadata)
    _META = {"lat", "lon", "time", "station", "loc_id", "date"}
    var_cols = [c for c in df.columns if c.lower() not in _META]

    # 2. Your scientific computation here
    result_df = df[var_cols].mean().to_frame().T  # Example: overall mean per variable

    # 3. Return a dict of filename → DataFrame
    return {"Daily_Mean_Output.csv": result_df}
```

Output Schema

Your function must return a dictionary where:

  • Keys = output filenames (e.g., "Daily_Mean.csv")
  • Values = Pandas DataFrames with the computed results

Each DataFrame should include lat, lon, date columns plus your computed values. The engine handles storage and download links automatically.

Writing Theory Documentation

The theory.md file is shown to users in the correction wizard. It should include:

1. Overview: What the module does, which variables it processes, and what scientific products it generates.
2. Calculation Logic: The formulas used. Use LaTeX notation: $$ T(°C) = T(K) - 273.15 $$
3. Expected Outputs: List every output file the module can generate, with column names and units.

Getting Started Template

Copy this starter template and modify it for your use case:

```python
# my_custom_module/logic.py
import pandas as pd
import numpy as np

REQUIRED_VARIABLES = ["MyVariable"]

AGGREGATION_SCHEMA = {
    "baseline": {
        "options": [
            {"id": "original", "label": "Original Data", "default": True}
        ]
    },
    "correctionOutputs": {
        "aggregations": [
            {
                "temporal": "daily",
                "label": "Daily Products",
                "types": [
                    {"id": "daily_mean", "label": "Daily Mean", "default": True}
                ]
            }
        ]
    }
}

def apply_correction(df, frequency="hourly", aggregations=None, **kwargs):
    _META = {"lat", "lon", "time", "station", "loc_id", "date"}
    var_cols = [c for c in df.columns if c.lower() not in _META]

    # Group by location and date, compute daily mean
    df["date"] = pd.to_datetime(df["time"]).dt.date
    daily = df.groupby(["lat", "lon", "date"])[var_cols].mean().reset_index()
    daily.columns = ["lat", "lon", "date"] + [f"{c}_daily_mean" for c in var_cols]
    return {"Daily_Mean.csv": daily}
```

How to Upload

1. Go to the Corrections Page: Navigate to Corrections in the dashboard sidebar.
2. Fill in Module Details: Enter a label (e.g., "Bias Correction v2") and upload your logic.py and theory.md files.
3. Share (Optional): Check "Share with Community" to make your module available in the global research library for other users.
4. Upload: Click UPLOAD_SCIENTIFIC_MODULE. Your module will appear in the correction engine dropdown immediately.
⚠️ Your Python script runs in a sandboxed environment. Only pandas, numpy, and scipy are available. If you need additional packages, contact the admin.

Frequently Asked Questions

Common questions about using IMDAA Studio.

What file formats can I upload?

IMDAA Studio accepts .zip archives containing NetCDF (.nc) files from the IMDAA reanalysis dataset.

What is the maximum upload size?

In Online mode, the limit is 3 GB per file. In Offline mode, there is no limit — processing happens locally on your machine.

Do I need Python installed?

Only if you use Offline mode. The local server uses Python for scientific data processing. Python 3.9 or later is required. Online mode handles everything on the cloud.

Can I re-use uploaded archives?

Yes. Previously uploaded archives are saved in your workspace. You can select them again during the upload step to create new CSV variants without re-uploading.

What happens to my data?

In Online mode, your data is stored securely on our servers during processing and is only accessible from your authenticated account. In Offline mode, all data stays on your local machine.

How do I switch between Online and Offline mode?

Use the mode toggle in the dashboard header. Offline mode only appears if you've installed the IMDAA Studio PWA app.

What if the offline server won't start?

Run imdaa-studio doctor in your terminal to diagnose common issues. Make sure Node.js v18+ and Python 3.9+ are installed.