DESKTOP ONLY

[to feel the sun upon my core] requires WebLLM AI inference, orchestral synthesis,
and significant computational resources.

Please visit from a desktop or laptop computer.

[TO FEEL THE
SUN UPON
MY CORE]
Live Weather  ·  AI Inference  ·  Orchestral Synthesis
LOADING MODEL
[TO FEEL THE SUN UPON MY CORE]
Live Weather  ·  AI Inference  ·  Orchestral Synthesis

a continuous broadcast of solar sensation. the machine cycles through locations around the earth, reads the weather, considers the angle of the sun, and writes one sentence about what the atmosphere does to the body. an orchestra accompanies it.

weather layer live data from Open-Meteo (open-meteo.com) — temperature, wind, humidity, precipitation, UV index, shortwave radiation, cloud cover. every 30 seconds, conditions are fetched for one location drawn pseudo-randomly from a pool of ~130 worldwide.
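the fetch step above can be sketched as follows. the location list here is a three-entry stand-in for the ~130-location pool, and the exact query parameters and seeding scheme are assumptions; only the variable names come from the description:

```javascript
// illustrative pool — the piece cycles a pool of ~130 such locations
const LOCATIONS = [
  { name: "Reykjavik", lat: 64.15, lon: -21.94 },
  { name: "Nairobi", lat: -1.29, lon: 36.82 },
  { name: "Ushuaia", lat: -54.8, lon: -68.3 },
];

// deterministic pseudo-random pick, one location per 30-second cycle
function pickLocation(seed) {
  const i = Math.abs(Math.imul(seed, 2654435761)) % LOCATIONS.length;
  return LOCATIONS[i];
}

// build the Open-Meteo current-conditions request for that location
function weatherUrl({ lat, lon }) {
  const current = [
    "temperature_2m", "wind_speed_10m", "relative_humidity_2m",
    "precipitation", "uv_index", "shortwave_radiation", "cloud_cover",
  ].join(",");
  return "https://api.open-meteo.com/v1/forecast" +
    `?latitude=${lat}&longitude=${lon}&current=${current}`;
}
```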
solar calculation sun elevation, solar time, and twilight phase are computed client-side from latitude/longitude using the solar position algorithm (Julian Date → ecliptic longitude → declination → hour angle → elevation). this determines which modal scale the drone inhabits — Lydian at noon, Locrian at night, Dorian at dawn.
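the chain above (Julian Date → ecliptic longitude → declination → hour angle → elevation) can be sketched with the standard low-precision Astronomical Almanac approximation. the piece's actual code, its scale-change thresholds, and the equation-of-time correction (omitted here) are assumptions:

```javascript
const DEG = Math.PI / 180;

// sun elevation in degrees above the horizon for a given moment and place
function solarElevation(date, latDeg, lonDeg) {
  // Julian date from Unix time, then days since the J2000 epoch
  const jd = date.getTime() / 86400000 + 2440587.5;
  const n = jd - 2451545.0;
  // mean longitude and mean anomaly of the sun (degrees)
  const L = (280.46 + 0.9856474 * n) % 360;
  const g = ((357.528 + 0.9856003 * n) % 360) * DEG;
  // ecliptic longitude, then declination via the obliquity (23.439°)
  const lambda = (L + 1.915 * Math.sin(g) + 0.02 * Math.sin(2 * g)) * DEG;
  const decl = Math.asin(Math.sin(lambda) * Math.sin(23.439 * DEG));
  // hour angle from approximate local solar time (equation of time omitted)
  const utcHours = date.getUTCHours() + date.getUTCMinutes() / 60;
  const H = (utcHours + lonDeg / 15 - 12) * 15 * DEG;
  const lat = latDeg * DEG;
  return Math.asin(
    Math.sin(lat) * Math.sin(decl) +
    Math.cos(lat) * Math.cos(decl) * Math.cos(H)
  ) / DEG;
}

// map elevation to the drone's mode — cutoffs are illustrative
function modalScale(elevationDeg) {
  if (elevationDeg > 45) return "lydian";  // high sun
  if (elevationDeg > 0) return "dorian";   // dawn / dusk
  return "locrian";                        // night
}
```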
AI inference — WebLLM SmolLM2-1.7B runs entirely in the browser via WebLLM (MLC-AI) and WebGPU. no server. the model receives a persona and a weather+solar context and is constrained to return, as JSON, one sentence of bodily sensation — not description, only what weather does to skin, lungs, nerves, vision. incoherent outputs are preserved as machine aphasia.
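the output-handling step might look like this. the JSON shape (a `sentence` field) is a hypothetical stand-in for whatever schema the piece actually requests; the preservation rule is the one described above — anything that fails validation is kept verbatim as aphasia:

```javascript
// validate a raw model completion: either a clean JSON sentence,
// or the raw text preserved untouched as machine aphasia
function parseMusing(raw) {
  try {
    const obj = JSON.parse(raw);
    if (typeof obj.sentence === "string" && obj.sentence.trim().length > 0) {
      return { text: obj.sentence.trim(), aphasia: false };
    }
  } catch (_) {
    // fall through: malformed JSON is not discarded
  }
  return { text: raw.trim(), aphasia: true };
}
```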
orchestral synthesis — Tone.js the drone is a five-voice string ensemble (violin I & II, viola, cello, harmonic partial). pitch root is derived from temperature; intervals from weather code (clear skies → major 7th, storm → semitone cluster); filter brightness from UV index and solar elevation; tremolo rate from wind speed; vibrato depth from humidity. the solo voice (French horn, oboe, or double bass, chosen by emotional analysis of the musing) plays a melody extracted by scoring the text for warmth, tension, breath, depth, and skin words. a piano inner voice accompanies.
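the weather-to-sound mappings can be sketched as pure functions feeding the Tone.js voices. only the mapped quantities come from the description; the curves, ranges, and chord spellings are illustrative assumptions:

```javascript
// temperature → drone root: colder is lower, clamped to a low-string register
function rootHzFromTemp(tempC) {
  const hz = 55 * Math.pow(2, (tempC + 10) / 40);
  return Math.min(220, Math.max(36, hz));
}

// WMO weather code → ensemble intervals (semitones above the root)
function intervalsFromWeatherCode(code) {
  if (code === 0) return [0, 4, 7, 11]; // clear skies → major 7th
  if (code >= 95) return [0, 1, 2, 3];  // thunderstorm → semitone cluster
  return [0, 3, 7, 10];                 // everything between → minor 7th
}

// wind speed (km/h) → tremolo rate in Hz
function tremoloHzFromWind(windKmh) {
  return 0.5 + windKmh / 10;
}
```

in performance these values would be pushed to the string-ensemble synths each cycle (e.g. retuning oscillators to the new root and intervals, setting a `Tone.Tremolo` frequency from the wind value).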
whisper system when the musing contains body-words (bone, core, lung, skin, nerve…), a breath-texture synthesis layer opens — pink noise through a bandpass tremolo, three bandpass-filtered sine oscillators approximating /uh/ vowel formants, consonant pulse bursts timed to word count, and a body oscillator in the cello register. browser TTS plays at 4% volume as a near-silent carrier beneath the synthesis.
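the whisper layer's control data can be sketched as below. the /uh/ (ʌ) formant centers are textbook average values for the three bandpassed sines; the pulse-timing rule (one soft burst per word, evenly spread) follows the description, but the exact gains and durations are assumptions:

```javascript
// approximate formant centers for the vowel /uh/ (ʌ), in Hz —
// one bandpass-filtered sine oscillator per formant
const UH_FORMANTS_HZ = [640, 1190, 2390];

// schedule consonant pulse bursts: one per word in the musing,
// centered evenly across the whisper's duration
function consonantPulseTimes(musing, durationSec) {
  const words = musing.trim().split(/\s+/).filter(Boolean);
  return words.map((_, i) => (i + 0.5) * (durationSec / words.length));
}
```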
b-side — inhabited process [F] press F (or the A/B toggle) to enter the B-side. each 30-second cycle the machine's weather context is given to you as a prompt — the same location, conditions, and solar phase the model receives. you have 30 seconds to write one sentence about what that atmosphere does to your body. your sentence is processed through the same pipeline as the machine's: same musing validation, same solo instrument, same score archiving. the feed shows both interleaved without distinction. open your microphone to route your voice through the drone's formant chain — your breath is filtered by the current weather's harmonic state and your fundamental pitch nudges the drone root in real time.
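the real-time root nudge from the microphone can be sketched as a simple smoothing step: each analysis frame, the detected vocal fundamental pulls the drone root a small fraction of the way toward itself. the nudge rate and the unvoiced-input guard are assumptions:

```javascript
// pull the drone root toward the detected vocal fundamental;
// rate controls how hard the voice steers the drone per frame
function nudgeRoot(currentHz, detectedHz, rate = 0.05) {
  if (!detectedHz || detectedHz <= 0) return currentHz; // no voiced input
  return currentHz + (detectedHz - currentHz) * rate;
}
```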
every 30 minutes the session auto-exports a PDF orchestral score: full notation on treble/alto/bass staves with proper clefs, noteheads, stems, slurs, ledger lines, dynamics, and musing annotation beneath each system.
the broadcast does not stop  ·  the machine has no preference for daylight  ·  each cycle is thirty seconds  ·  this is not a performance
WebLLM  ·  Open-Meteo  ·  Tone.js  ·  Solar Position Algorithm
[TO FEEL THE SUN UPON MY CORE] standby solar —
drone
solo
whisper
cycles —
inhabited process  ·  you are a node
waiting for cycle…
drone state
root
solar
scale
pitch
mic  off
cycle submissions 0 machine  ·  0 human
BROADCAST SOLAR: —
NEXT: —