- Demanding task environments (e.g., supervising a remotely piloted aircraft) require performing tasks quickly and accurately; however, periods of both low and high operator workload can degrade task performance. Intelligently modulating the system’s demands and interaction modality in response to changes in the operator’s workload state may improve performance by avoiding undesirable workload states. Such a system requires real-time estimation of each workload component (i.e., cognitive, physical, visual, speech, and auditory) in order to adapt the appropriate modality. Existing workload estimation systems assess multiple workload components post hoc, but none estimates speech workload or functions in real time. This thesis presents an algorithm that estimates speech workload and mitigates undesirable workload states in real time. An analysis of the algorithm’s accuracy is presented, along with an evaluation of its generalizability across individuals, human-machine teaming paradigms, and task environments (stationary and non-stationary). The ideal window sizes for real-time and offline use are identified, and the impact on performance of adding physiological data and filler utterances to the base feature set is assessed. Real-time speech workload estimation is a crucial step toward adaptive human-machine systems.