With upwards of 7.5 million people suffering from speech impediments, it is imperative that a low-cost, efficient speech aid be developed. Conditions such as stroke, ALS, and cerebral palsy impair patients’ ability to speak. Those who can afford them use currently available devices, such as eye and cheek trackers, which are cumbersome and inefficient. In this study, a speech aid known as a Silent Speech Interface (SSI) was created. Patients with speech disorders could use this device to communicate voicelessly, merely by articulating words or sentences in the mouth without producing any sound. The device captures and records the subtle neurological activation of the speech muscles from the surface of the skin; in simpler terms, the SSI records electromyographic (EMG) signals from the speech system. These EMG signals are then classified into speech in real time by a trained machine learning model. The device achieves an accuracy of 80.1% and was developed for a cost of less than $100. Overall, this study presents a device that measures biological signals to determine what an individual wants to communicate and translates the collected signals into English using a Support Vector Machine (SVM) machine learning model.
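The classification step described above (EMG signals in, predicted speech out, via an SVM) can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the study's implementation: the actual feature extraction, channel count, and vocabulary are not specified here, so every dimension and label below is an assumption.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical dataset: 200 windowed EMG feature vectors with 8 features each
# (e.g. RMS energy per electrode channel), labeled with one of 4 silently
# articulated words (classes 0-3). Real data would come from skin-surface EMG.
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)
# Shift each class mean so the classes are separable (synthetic stand-in only).
X += y[:, None] * 1.5

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an SVM classifier, mirroring the SVM model named in the study.
clf = SVC(kernel="rbf")
clf.fit(X_train, y_train)

# Held-out accuracy on unseen windows; a new EMG window would be classified
# in real time with clf.predict(window_features).
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

In deployment, the same `predict` call would run on each incoming EMG window so that recognized words can be emitted as they are articulated.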