Tools for VEX TM

December 25, 2025 · Vivaan M #Guides

As a past VEX competitor, I know how amazing the experience can be when you’re at a competition with a great atmosphere. This year, my school is hosting a VEX Competition, so we’ve decided we’re going to take it up a notch. Last year, we managed to live stream the entire event and used static lights to light the field.

This year, however, it’s going to reach a new level, with multi-colour lighting, synchronised music, and automated field management. But how are we going to pull this off? Well, we’ll be using an application that I will be building. Its aim is to connect to the VEX TM Public API, receive live events such as matchStarted, and trigger actions based on them.

Interacting with the VEX TM Public API

In order to sync everything up with the matches, we needed live data from the VEX Tournament Manager server, which means using the VEX TM Public API (check out their documentation for API access). After applying for an API token from DWAB, I identified the part of the API I would use the most: the Field Set WebSocket, which provides live data on the events occurring on a field.

The VEX TM API uses HMAC-SHA256 signatures for authentication, which means every request needs to be properly signed. Here’s how I implemented the authentication:

import hmac
import hashlib
from datetime import datetime, timezone

def create_signature(http_verb, uri_path, host, date, token, api_key):
    """
    Creates HMAC-SHA256 signature for VEX TM API requests.
    """
    string_to_sign = (
        f"{http_verb.upper()}\n"
        f"{uri_path}\n"
        f"token:{token}\n"
        f"host:{host}\n"
        f"x-tm-date:{date}\n"
    )

    signature = hmac.new(
        api_key.encode(),
        string_to_sign.encode(),
        hashlib.sha256
    ).hexdigest()

    return signature

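To show how it slots together, here’s roughly how I’d generate the date header and sign a request. The URI path and host below are placeholders, and the exact date format DWAB expects is described in their documentation - treat this as a sketch rather than a copy-paste recipe.

# Rough usage sketch - URI path and host are placeholders, and the date format
# shown here is just an example; check the official API documentation.
from datetime import datetime, timezone

token = "YOUR_API_TOKEN"
api_key = "YOUR_API_KEY"
request_date = datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")

signature = create_signature(
    http_verb="GET",
    uri_path="/api/fieldsets/1",
    host="localhost",
    date=request_date,
    token=token,
    api_key=api_key,
)

headers = {
    "Authorization": f"Bearer {token}",
    "x-tm-date": request_date,
    "x-tm-signature": signature,
}
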
Once authenticated, connecting to the WebSocket is straightforward. The WebSocket streams events like fieldQueued, matchStarted, audienceDisplayChanged, and more. Here’s a simplified version of the connector:

import asyncio
import websockets
import json

async def connect_to_fieldset(base_url, field_set_id, token, signature, date):
    """
    Connects to VEX TM Field Set WebSocket and listens for events.
    """
    ws_url = f"wss://{base_url}/api/fieldsets/{field_set_id}"

    headers = {
        "Authorization": f"Bearer {token}",
        "x-tm-date": date,
        "x-tm-signature": signature
    }

    async with websockets.connect(ws_url, extra_headers=headers) as websocket:
        print("Connected to VEX TM WebSocket!")

        while True:
            message = await websocket.recv()
            event = json.loads(message)

            # Process the event
            print(f"Received event: {event['type']} for field {event.get('field')}")
            await process_event(event)

Every event that comes through gets validated, normalized into an Event object, and pushed into a central event queue for processing. This architecture ensures we can handle multiple fields simultaneously without blocking.
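
As a rough sketch of what that normalisation looks like (the field names on the Event object here are illustrative, not the exact TM schema):

from dataclasses import dataclass
from typing import Optional
import asyncio

@dataclass
class Event:
    """Normalised form of a raw TM WebSocket message (illustrative fields)."""
    type: str                  # e.g. "matchStarted", "fieldActivated"
    field: Optional[int]       # field ID the event relates to, if any
    match_name: Optional[str]  # e.g. "Q12", when the event carries match info
    raw: dict                  # original payload, kept for debugging

event_queue: asyncio.Queue = asyncio.Queue()

async def enqueue_event(raw: dict):
    """Validate and normalise a raw message, then push it onto the central queue."""
    if "type" not in raw:
        return  # ignore malformed messages
    event = Event(
        type=raw["type"],
        field=raw.get("fieldID"),     # illustrative key names
        match_name=raw.get("match"),
        raw=raw,
    )
    await event_queue.put(event)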

Connecting via OSC to the Lighting Board

OSC. Open Sound Control. Why on earth would we be using a ‘Sound’ protocol to control lighting? I have no clue. But it works.

OSC makes use of commands sent over a UDP port, and the ZerOS lighting console (the one we’re using) supports it natively. Commands follow a simple pattern:

/zeros/playback/go/1

This command would trigger playback 1 on the lighting console. Different commands allow you to fire cues, release playbacks, and control various lighting parameters.

Here’s how I implemented the OSC controller in Python using the python-osc library:

from pythonosc import udp_client
import logging

class ZerOSController:
    def __init__(self, board_ip, port=8830):
        """
        Initialize connection to ZerOS lighting board via OSC.
        """
        self.board_ip = board_ip
        self.port = port
        self.client = udp_client.SimpleUDPClient(self.board_ip, self.port)
        logging.info(f"Initialized OSC client for ZerOS at {board_ip}:{port}")

    def execute_action(self, action):
        """
        Execute a lighting action on the ZerOS board.
        Actions contain preset_id, command, and target_type.
        """
        target_id = action.preset_id
        target_type = action.target_type or 'playback'
        command = action.command or 'go'

        # Construct OSC address: /zeros/<target_type>/<command>/<target_id>
        address = f"/zeros/{target_type}/{command}/{target_id}"

        self.client.send_message(address, None)
        logging.info(f"Sent OSC command to {address}")

With this setup, I can trigger different lighting scenes based on match events. For example:

  • Match Queued: Dim blue lights indicating the field is ready
  • Match Active: Bright white lights for full visibility
  • Match Complete: Pulsing green lights to celebrate

The beauty of OSC is that it’s fire-and-forget. Send the command and the lighting board handles the rest - no need to wait for responses or manage state.
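
To make that concrete, here’s a sketch of how those scenes could be mapped onto playbacks using the controller above. The playback numbers are made up and depend entirely on how the console is programmed:

from types import SimpleNamespace

# Illustrative mapping from match state to ZerOS playbacks - the numbers are placeholders.
SCENE_PRESETS = {
    "queued": SimpleNamespace(preset_id=1, target_type="playback", command="go"),    # dim blue
    "active": SimpleNamespace(preset_id=2, target_type="playback", command="go"),    # bright white
    "complete": SimpleNamespace(preset_id=3, target_type="playback", command="go"),  # pulsing green
}

def set_scene(zeros_controller, state):
    """Fire the playback mapped to the given match state."""
    action = SCENE_PRESETS.get(state)
    if action is not None:
        zeros_controller.execute_action(action)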

Connecting to Spotify

Music makes everything better, and synchronized music makes it even better. I wanted the ability to play specific tracks from a playlist based on what’s happening in the competition. Enter Spotify’s API and the spotipy library.

The Spotify controller needed to do a few things:

  1. Authenticate using OAuth2
  2. Find the right playback device (we’re using a dedicated “TM-MUSIC” device)
  3. Play specific tracks or random tracks from a playlist
  4. Control playback (play, pause, skip)

Here’s the implementation:

import spotipy
from spotipy.oauth2 import SpotifyOAuth
import random

class SpotifyController:
    def __init__(self, client_id, client_secret, redirect_uri, device_name=None):
        """
        Initialize Spotify controller with OAuth authentication.
        """
        self.device_name = device_name

        # Authenticate with Spotify
        self.sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
            client_id=client_id,
            client_secret=client_secret,
            redirect_uri=redirect_uri,
            scope="user-modify-playback-state user-read-playback-state"
        ))

        # Find and set the target device
        self._set_device_id()

    def _set_device_id(self):
        """
        Find the Spotify device by name and set its ID.
        """
        devices = self.sp.devices()

        if self.device_name:
            for device in devices['devices']:
                if device['name'].lower() == self.device_name.lower():
                    self.device_id = device['id']
                    return

        # Fallback to first available device
        self.device_id = devices['devices'][0]['id']

    def play_playlist_track(self, playlist_uri, track_number=None):
        """
        Play a specific track from a playlist, or a random one.
        """
        if track_number:
            # Play specific track (1-indexed)
            self.sp.start_playback(
                device_id=self.device_id,
                context_uri=playlist_uri,
                offset={"position": track_number - 1}
            )
        else:
            # Play random track from playlist
            playlist = self.sp.playlist_items(playlist_uri)
            total_tracks = playlist['total']
            random_index = random.randint(0, total_tracks - 1)

            self.sp.start_playback(
                device_id=self.device_id,
                context_uri=playlist_uri,
                offset={"position": random_index}
            )

Now I can trigger different music based on match context - upbeat tracks during active matches, victory music when a match completes, and calm background music during breaks.
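
The class above only covers playback itself; in the event processor (shown later), audio actions arrive through an execute_action method on this controller. A rough version of that method might look like this - the attribute names on the action object are illustrative rather than a fixed schema:

    # Added to SpotifyController - dispatches audio actions from the event processor.
    def execute_action(self, action):
        """
        Dispatch an audio action. The attribute names on `action`
        (command, playlist_uri, track_number) are illustrative.
        """
        command = getattr(action, "command", "play")

        if command == "play":
            self.play_playlist_track(action.playlist_uri,
                                     getattr(action, "track_number", None))
        elif command == "pause":
            self.sp.pause_playback(device_id=self.device_id)
        elif command == "skip":
            self.sp.next_track(device_id=self.device_id)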

Connecting to an ATEM

For video production, we’re using a Blackmagic ATEM switcher to manage multiple camera feeds. The ATEM needs to automatically switch between cameras based on which field is active. I used the PyATEMMax library to control it:

import logging
import PyATEMMax

class AtemController:
    def __init__(self, atem_ip):
        """
        Initialize connection to ATEM video switcher.
        """
        self.atem_ip = atem_ip
        self.atem = PyATEMMax.ATEMMax()
        self._connect()

    def _connect(self):
        """
        Connect to the ATEM switcher.
        """
        self.atem.connect(self.atem_ip)
        self.atem.waitForConnection(timeout=5)

    def execute_action(self, action):
        """
        Switch to a specific camera input.
        """
        camera_id = int(action.camera_id)
        self.atem.changeProgramInput(camera_id)
        logging.info(f"Switched ATEM to camera {camera_id}")

With this setup, when a match goes active on Field 1, the system automatically switches the program output to Camera 1, ensuring the live stream always shows the right field.
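
As a sketch of that hookup (the IP address and the one-to-one field-to-camera mapping are just examples for illustration):

from types import SimpleNamespace

# Illustrative field-to-camera mapping; adjust to your actual camera layout.
FIELD_TO_CAMERA = {1: 1, 2: 2, 3: 3}

atem = AtemController("192.168.1.240")  # placeholder IP

def on_field_activated(field_id):
    """Switch the program output to the camera covering the active field."""
    camera = FIELD_TO_CAMERA.get(field_id)
    if camera is not None:
        atem.execute_action(SimpleNamespace(camera_id=camera))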

Linking it all together

The magic happens in the Event Processor - a central component that:

  1. Consumes events from the queue
  2. Updates field state (stored as JSON files)
  3. Looks up action mappings from actions.json
  4. Executes the appropriate actions on each controller

Here’s a simplified flow:

async def process_event(event_queue):
    """
    Main event processing loop.
    """
    while True:
        # Get next event from queue
        event = await event_queue.get()

        # Update field state if needed
        field_state = None
        if event.field:
            field_state = update_field_state(event)

        # Look up mapped actions for this event
        actions = get_actions_for_event(event.type, event.field, field_state)

        # Execute each action
        for action in actions:
            if action.type == "lighting":
                zeros_controller.execute_action(action)
            elif action.type == "video":
                atem_controller.execute_action(action)
            elif action.type == "audio":
                spotify_controller.execute_action(action)

        # Log the event for audit trail
        log_audit_entry(event, actions)

The action mappings live in a JSON configuration file, making it easy to customize without touching code:

{
    "on_event": {
        "fieldActivated": [
            {
                "match_name": "*",
                "fields": {
                    "1": [
                        {"type": "lighting", "preset_id": "13"},
                        {"type": "video", "camera_id": "1"},
                        {"type": "audio", "command": "play"}
                    ]
                }
            }
        ]
    }
}

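For reference, a simplified version of the lookup the event processor performs against this file could look like the following - the real implementation handles match-specific mappings more carefully, and wraps the raw dictionaries in action objects:

import json

def get_actions_for_event(event_type, field, field_state, config_path="actions.json"):
    """
    Return the actions mapped to an event type and field.
    Simplified sketch: only handles the wildcard match_name shown above,
    and field_state is unused here.
    """
    with open(config_path) as f:
        config = json.load(f)

    actions = []
    for mapping in config.get("on_event", {}).get(event_type, []):
        if mapping.get("match_name") not in ("*", None):
            continue  # match-specific mappings skipped in this sketch
        actions.extend(mapping.get("fields", {}).get(str(field), []))
    return actions
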
Beyond Automation: The Full System

While the core automation is impressive, the system needed more to be truly production-ready. Here’s what else I built:

Web Interface for Control and Monitoring

The Flask-based web interface gives operators full control without needing to touch code. The dashboard shows:

  • Real-time field status for all competition fields (queued, countdown, active, complete)
  • Manual override controls to trigger any lighting preset, camera switch, or audio action on demand
  • Live event stream showing recent events from the VEX TM API
  • System logs for troubleshooting when things inevitably go wrong

The interface also includes a configuration editor where you can modify device IPs, field-to-camera mappings, and action mappings on the fly. No need to SSH into the server or edit JSON files manually - everything is accessible through the browser.
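
The status side of that dashboard is not much more than a couple of Flask routes reading the same state files the event processor writes. A minimal sketch, assuming the state files live in a state/ directory:

import json
from pathlib import Path
from flask import Flask, jsonify

app = Flask(__name__)
STATE_DIR = Path("state")  # illustrative location of field1.json, field2.json, ...

@app.route("/api/fields")
def field_status():
    """Return the current state of every field for the dashboard."""
    fields = {}
    for state_file in sorted(STATE_DIR.glob("field*.json")):
        fields[state_file.stem] = json.loads(state_file.read_text())
    return jsonify(fields)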

Field State Tracking

Each competition field gets its own state file (field1.json, field2.json, etc.) that tracks:

  • Current state (standby → queued → countdown → active → finish)
  • Active match ID and name
  • Timestamps for state transitions
  • Team information for the current match

This persistent state means the system can recover gracefully from crashes - it knows exactly where each field left off. The event processor is the single source of truth for field state, ensuring consistency across all controllers.
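
A stripped-down version of that state update might look like this - the real files carry more fields, and event types get mapped onto states rather than stored verbatim:

import json
from datetime import datetime, timezone
from pathlib import Path

STATE_DIR = Path("state")

def update_field_state(event):
    """Persist the latest state for a field so the system can recover after a crash."""
    state_file = STATE_DIR / f"field{event.field}.json"
    state = json.loads(state_file.read_text()) if state_file.exists() else {}

    state["state"] = event.type            # simplified: real system maps event types to states
    state["match_name"] = event.match_name
    state["updated_at"] = datetime.now(timezone.utc).isoformat()

    state_file.write_text(json.dumps(state, indent=2))
    return state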

Intelligent Match Scheduling

Two background threads work together to keep teams informed:

  1. Schedule Fetcher: Periodically pulls the latest tournament schedule from the VEX TM API and caches it locally. This ensures we have the data even if the TM server becomes temporarily unreachable.

  2. Match Scheduler: Monitors the cached schedule and automatically enqueues notification events when matches are approaching. Configurable lead time means teams get notified 5 minutes before their match (or whatever interval you set).

These notifications flow through the same event queue as everything else, triggering pop-ups on team room displays and potentially even audio announcements.
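
Conceptually, the Match Scheduler is just a loop over the cached schedule, along these lines (the schedule format and polling interval here are illustrative):

import time
from datetime import datetime, timedelta, timezone

LEAD_TIME = timedelta(minutes=5)  # configurable lead time

def match_scheduler_loop(get_cached_schedule, enqueue_notification):
    """Enqueue a notification event when a match is within the configured lead time."""
    notified = set()
    while True:
        now = datetime.now(timezone.utc)
        for match in get_cached_schedule():  # e.g. [{"name": "Q12", "start": datetime, "teams": [...]}]
            if match["name"] in notified:
                continue
            if now >= match["start"] - LEAD_TIME:
                enqueue_notification(match)  # flows through the same event queue as everything else
                notified.add(match["name"])
        time.sleep(15)  # illustrative polling interval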

Public Room Pages for Teams

This was one of my favorite features to build. Teams don’t need any credentials - they just navigate to the room page, enter their room number, and get:

  • Embedded YouTube live stream of their assigned field
  • Automatic pop-up notifications when their next match is approaching
  • Match queue display showing upcoming matches for their team

Administrators can configure rooms through a separate management interface, assigning stream URLs and team lists. When the Match Scheduler determines a team’s match is imminent, a pop-up appears on their room page with match details - no manual coordination needed.

Pause Controls for Safety

During setup, testing, or when troubleshooting issues, you don’t want automation firing off unexpectedly. The pause controls let you selectively disable:

  • Video switching (manual camera control)
  • Audio playback (silence during setup)
  • Lighting changes (freeze current state)

These toggles live in config.json and are checked by the event processor before executing any action. Even better, changes take effect immediately - no need to restart the system. This was crucial during our testing when we needed to test one system at a time.
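
The check itself is deliberately simple - the processor re-reads config.json before executing each action, so flipping a toggle takes effect straight away. A minimal sketch, assuming keys like pause_video in config.json:

import json

def is_paused(action_type, config_path="config.json"):
    """
    Re-read config.json on every call so pause toggles apply without a restart.
    Key names ("pause_video", "pause_audio", "pause_lighting") are illustrative.
    """
    with open(config_path) as f:
        config = json.load(f)
    return config.get(f"pause_{action_type}", False)

# In the event processor, before executing an action:
#     if is_paused(action.type):
#         continue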

Final Thoughts

Building this system has been an incredible learning experience. From understanding authentication schemes and WebSocket protocols, to working with OSC, Spotify’s API, and video switchers - every component taught me something new.

The result? A fully automated competition production system that handles lighting, video, and audio synchronization in real-time. When a match starts, the lights change, the camera switches, and the music plays - all automatically. It’s like having a full production crew, but it’s just code.

For anyone interested, the full project is available on GitHub. Feel free to check it out, adapt it for your own events, or just see how everything fits together.


Have questions or ideas? Drop a comment below! I’d love to hear from other VEX teams or anyone working on similar event automation projects.
