I've been bilaterally profoundly deaf since I was 18 months old (an unfortunate side effect of a severe bacterial meningitis infection), and I've been through more hearing aids than I'd care to count.

While some recent developments pique my interest (streaming music to my hearing aid via Bluetooth is a luxury I wouldn't think twice about snapping up), this development from Columbia University in New York would be an immense game-changer for me and many other deaf people. I'm sure we're all too familiar with the 'cocktail party problem': a relentless cacophony of hubbub, music, and furniture rearrangement pounding our ear(s) when all we want to do is catch that imminent punchline being shared by someone just a couple of feet away.

Aside from the incredible technological feat involved (using artificial intelligence to separate voices and compare them with a listener's brainwaves so as to identify and amplify the desired speaker), this innovation has the potential to dramatically improve the quality of life of millions of hearing-impaired people, making social, workplace, and other everyday situations that much more accessible and enjoyable.

I, for one, look forward to hearing more. Or should that be less, as the case may be...?