Authors: Sheng-Hsiou (Shawn) Hsu, Dan Furman
This is a continuation of our previous post where we explain brain-computer interfaces in five levels of difficulty, following WIRED’s style. We recommend you first read Part 1 for levels 1 through 3. Now let’s dive into levels 4 and 5.
Level 4: Grad Student
With the background from Levels 1 through 3, you, our 'graduate student' for the purposes of this post, might wonder what the key research questions in the BCI field are. Which challenges are valuable and may be solvable in the next 3-5 years? In other words, what could be worthwhile questions to work on for a master's or Ph.D.? All three pillars of BCI (hardware, algorithms, and applications) have ripe questions here at the close of 2023.
We go into some of these below. In a word: hardware challenges abound in the size, weight, power, and cost of BCI devices; in software, the overarching mission of understanding the brain is paramount (and possibly an impossible quest); and in applications, the challenge of effectively achieving the user's goal, i.e. quality, is all that matters. Together these combine to create the value of BCI.
Hardware: SWaP-C Challenges to Mainstream BCI
Historically, computer technology has gotten smaller and smaller. The almighty ‘miniaturization’ principle. Though it may not sound profound, it is, and this trend is one of the reasons why BCI technology is now becoming available to people for everyday use. Electronic technology keeps getting smaller and smaller mainly because smaller devices fit into our lifestyles better. The computer industry is now regularly etching precise, detailed designs on the order of nanometers into silicon chips so our phones and computers can be lighter while also being more powerful: to emphasize — nanometers with precision — a supremely amazing feat.
It's fair to say that as a species we have been fairly obsessed with miniaturizing our technology and very successful in scratching that itch. An extraordinary amount of engineering imagination and effort has gone into shrinking components and making the already small still smaller, and the trend continues. The computer scientist Gordon Bell observed that miniaturization actually happens at regular intervals over time, generating new classes of computers in the process that, in his words:
“…bring with them new markets, ecosystems, and — most importantly — new types of users.”
A natural convergence is currently underway: BCI technology is being embedded in headphones, earbuds, and AR/VR form factors that all converge on the human head, opening the era of BCI in consumer electronics globally. The electronics for amplification, filtering, digitization, and communication underpinning these new capabilities have been consistently miniaturized, but brain sensors have been more constrained by the biophysics of conducting ionic currents at the skin, where sensor surface area correlates with signal-to-noise ratio (SNR). As sensors become integrated with existing technology and headwear form factors, size will matter less, since the sensors will be small enough to be functionally invisible to the user.
Devices that are barely visible to the human eye are already designed to fit seamlessly into, and around, our bodies, and the progression trends steadily toward the nanoscale. Implanted sensors have already allowed humans and computers to interact with each other directly. These new symbiotic relationships transform not only daily lives but our entire species in the process; it's as if one day we will all grow another limb or develop, theoretically, a new sense. According to Bell, we are approaching the apex of small electronics. With that convergence to the minimum scale for electronics, there may then be an inversion: technology going inward and contorting itself in new dimensions. As in, into our bodies, with implants that, for example, measure from the brain with a Young's modulus matched to the surrounding tissue, so the implant does not shear and damage that tissue as it naturally jiggles about in daily life.
The key research challenge in this pillar is how to make BCI sensors and devices easy to use, comfortable, affordable, unobtrusive, and reliable for long-term monitoring in daily life. In the past one to two decades, efforts in both academia and industry have produced significant advances in both invasive and non-invasive BCI technology. On the research side, continual breakthroughs have been made in sensor materials that are biocompatible, flexible, and able to form stable contact with skin, in the form of tattoos or fabrics, for example. Another important direction is miniaturizing the sensors and devices so they are easy for users to put on themselves, comfortable to wear, and long-lasting, all while recording reliable signals from the brain with sufficient spatial coverage.
We now see more companies tackling this particular challenge, delivering BCI products in the form of earbuds, headphones, and eyewear, or products that integrate with a VR headset or a hearing aid. Lastly, it's important to point out the breakthroughs in invasive technology. Research labs and companies are pushing to maximize the number of sensors and the spatial coverage of implanted devices while minimizing the risks and costs of surgical operations and long-term use.
These are highly active research and development areas that will likely see many breakthroughs in the next 5 years. One framework for thinking about where to focus as a graduate student in a hardware field is the acronym SWaP-C, which stands for Size, Weight, Power, Cost; the goal is to decrease all of these variables. There's a dual meaning to it also: Style, Washability, Prestige, Comfort. Perhaps the areas that will advance BCI most today are these alternative SWaP-Cs, where sexy, cool, and comfortable meet in headwear products like glasses or earrings, hats, beanies, and bindis, all supreme expressions of what a BCI can be. What will BCI devices look like in the future? Inspired by Mark Weiser's view, we believe BCI will disappear into the fabric of everyday life and be indistinguishable from it. At Arctop we think of the technology as an extension of cognition, of our everyday thinking, expanded and extended, embodied in the environment and technologies around us, without being felt as a technology. That's how seamless it should be.
Without the ability to sense brain activity remotely, at a distance, BCI sensors need to be around, on, or inside the head. With that constraint, there is limited real estate on the head where people can (and are willing to) wear a device for a long time. This is why the trend, already visible and only accelerating, is BCI technology being miniaturized and integrated into existing eyewear and headwear, including eyeglasses, headphones, earbuds, hearing aids, headbands, helmets, and VR/AR headsets.
Another trend is invasive technology: making implants smaller, safer, and more affordable. For now, the risk of invasive technology may only be justified by clinical or medical use for those in need, and the technology still needs to go through rigorous clinical validation and regulatory processes to de-risk adverse effects of long-term use. But in the long run, when it reaches a risk level similar to replacing a missing tooth, in exchange for unlocking greater human abilities, would you be willing to do so? We might be.
Software: Algorithms for Decoding Brain Data and Infrastructure
The decoding algorithms of BCI technology have mainly used digital signal processing and statistical machine learning techniques. Artificial neural networks traditionally have not worked very well on real neural activity, but with recent breakthroughs in artificial intelligence (specifically deep learning methods) for text, image, and speech recognition, there has been an outpouring of publications applying deep-learning methods to biosignals. Deep learning has not yet been a magic wand that solves the recognition problem for brain activity, though. Why? The key challenges lie in both the unique data and the algorithms.
On the data side, unlike text and images, brain data has few public, large-scale datasets for training models and benchmarking performance. The datasets that exist often use their own data acquisition hardware and idiosyncratic collection protocols, which introduces more variability. There is also a lack of standard metrics and criteria for controlling signal quality, and no consensus on data cleaning methods despite wide agreement that the always noisy signal needs cleaning. Lastly, limited "ground truth" labels are available, and they often have low temporal resolution and high variability (e.g. disagreement even between “experts”).
But in the last decade, significant efforts have gone into addressing these challenges, and we are optimistic that they are likely solvable in the near future. For example, some BCI research communities and non-profit organizations have started to publish their datasets (OpenNeuro) and open-source codebases (NeuroTechX) and to establish and follow standards for data and label formats, for example 'EEG-BIDS' (Pernet et al 2019). Effective methods for handling artifacts and noise in EEG in real-world settings have been proposed and evaluated (Chang et al 2019), and additional data modalities are being used (e.g. behavior from phones, cameras, and physiological signals) to provide rich context and automated labels for the EEG. With these advances, we may reach a critical point with enough data and labels to drive the breakthrough in algorithm development.
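To make the data-side picture concrete, here is a minimal sketch of how a shared EEG-BIDS dataset might be loaded and lightly cleaned using the open-source MNE-Python and mne-bids libraries, which are commonly used with OpenNeuro-style datasets. The dataset path, subject, task, filter band, and ICA settings are hypothetical placeholders, not a prescribed pipeline.

```python
# Minimal sketch: load an EEG-BIDS dataset and apply basic cleaning with
# MNE-Python. Paths and parameters below are hypothetical placeholders.
import mne
from mne_bids import BIDSPath, read_raw_bids

bids_path = BIDSPath(subject="01", task="rest", datatype="eeg",
                     root="/data/my_eeg_bids_dataset")  # hypothetical path
raw = read_raw_bids(bids_path=bids_path)
raw.load_data()

# Band-pass filter to remove slow drift and high-frequency noise (example band).
raw.filter(l_freq=1.0, h_freq=40.0)

# ICA is one common way to separate eye-blink and muscle artifacts.
ica = mne.preprocessing.ICA(n_components=20, random_state=97)
ica.fit(raw)
ica.exclude = []              # indices of artifact components would go here
raw_clean = ica.apply(raw.copy())
```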
On the algorithm side, for readers who are AI researchers and developers wanting to transfer AI breakthroughs from other domains and tackle the brain's grand challenge, brain signals present a few unique challenges. First is the “context”, or labels, under which brain signals are collected. Unlike an image of a dog, where the ground truth is unambiguous and universal, the ground truth for a period of brain signal can be noisy and subjective. There is not, for example, consensus about how many emotions humans can feel or the best way to establish the timing of when they are being felt. Hence approaches like self-supervised learning and the use of multi-modal data for automated label generation are rising and necessary.
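As a toy illustration of automated label generation from a second modality, the sketch below assigns weak labels to EEG windows from (hypothetical) phone-interaction timestamps instead of hand annotation. All signals, event times, and window sizes are made up for illustration.

```python
# Toy sketch: weak labels for EEG windows from phone-interaction timestamps.
import numpy as np

fs = 250                                   # sampling rate (Hz)
eeg = np.random.randn(8, fs * 600)         # 8 channels, 10 minutes of fake EEG
event_times = np.array([12.4, 37.9, 301.2, 455.0])  # phone taps, in seconds

win_len = 5 * fs                           # 5-second windows
n_windows = eeg.shape[1] // win_len
windows = eeg[:, :n_windows * win_len].reshape(8, n_windows, win_len)

# Label a window 1 if a phone interaction occurred inside it, else 0.
starts = np.arange(n_windows) * 5.0
labels = np.array([int(np.any((event_times >= s) & (event_times < s + 5.0)))
                   for s in starts])
# `windows` and `labels` can now feed a weakly or self-supervised pipeline.
```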
The second challenge is that brain activity is always changing. Signals come from sources throughout the brain that overlap and produce transients that propagate in complex ways, so there is a lot to analyze. Interestingly, brain activity seems to transition from one quasi-stationary state to another at unbounded timescales, like the progression of sleep stages or a sequence of thoughts, but these brain states do not have fixed time intervals. Learning the “unit” of brain states, like learning the “vocabulary” of speech, may be the key to reducing the complexity of the brain decoding challenge and relaxing the data requirements. By transforming the problem from decoding time-series data to decoding sequence data (e.g. leveraging methods like brain-state modeling, Hsu et al 2018, 2022, or speech-to-unit translation), dimensions can be reduced and patterns can be modeled more effectively from first principles, obviating the need for massive datasets that require more parsing.
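One simple way to picture the "vocabulary" idea, under our own assumptions rather than the cited brain-state models, is to compute band-power features per window and cluster them so each window gets a discrete state token:

```python
# Simplified sketch: turn continuous EEG into a sequence of discrete "state
# tokens" via band-power features and k-means. Window length, bands, and k
# are arbitrary illustrative choices, not the cited brain-state methods.
import numpy as np
from scipy.signal import welch
from sklearn.cluster import KMeans

fs = 250
eeg = np.random.randn(8, fs * 600)            # fake 8-channel recording
win = 2 * fs                                  # 2-second windows
n_win = eeg.shape[1] // win

feats = []
for i in range(n_win):
    seg = eeg[:, i * win:(i + 1) * win]
    f, psd = welch(seg, fs=fs, nperseg=win)
    # Mean power per channel in theta (4-8 Hz), alpha (8-13 Hz), beta (13-30 Hz).
    bands = [(4, 8), (8, 13), (13, 30)]
    feats.append(np.concatenate(
        [psd[:, (f >= lo) & (f < hi)].mean(axis=1) for lo, hi in bands]))
feats = np.log(np.array(feats) + 1e-12)       # log power is better behaved

tokens = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(feats)
# `tokens` is a sequence of discrete state labels, one per window, which
# sequence models can treat like a small "vocabulary".
```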
The third challenge is the “human factor”: BCI performance changes from day to day and from person to person due to differences in brain anatomy, sensor locations, or user states like attention, emotion, and motivation (Lotte et al 2013). To tackle this challenge, transfer learning techniques are needed that use data and pre-trained models from other days, other users, or even other devices to facilitate the “re-calibration” of your model (Chiang et al 2021). Adaptive learning is also being used to automatically and continuously adapt the model to the user.
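As one concrete, widely used example of such transfer techniques (not the specific method of the papers cited above), the sketch below applies Euclidean alignment: each session's trials are whitened by that session's mean spatial covariance, bringing different days, users, or devices into a more comparable space before a shared model is applied.

```python
# Illustrative sketch of Euclidean alignment for cross-session EEG transfer.
import numpy as np

def inv_sqrt(mat):
    """Inverse square root of a symmetric positive-definite matrix."""
    vals, vecs = np.linalg.eigh(mat)
    return vecs @ np.diag(vals ** -0.5) @ vecs.T

def euclidean_align(trials):
    """trials: array (n_trials, n_channels, n_samples) from one session."""
    covs = np.array([t @ t.T / t.shape[1] for t in trials])
    ref_inv_sqrt = inv_sqrt(covs.mean(axis=0))   # session mean covariance
    return np.array([ref_inv_sqrt @ t for t in trials])

rng = np.random.default_rng(0)
source = rng.normal(size=(40, 8, 500)) * 2.0     # fake "day one" session
target = rng.normal(size=(10, 8, 500)) * 0.5     # fake "day two" session

source_aligned = euclidean_align(source)
target_aligned = euclidean_align(target)
# A classifier fit on features of `source_aligned` can then be applied to
# `target_aligned`, typically with only a little re-calibration data.
```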
Applications
BNCI Horizon 2020 (Brunner et al 2015), one of the highly influential BCI projects, presented five categories of BCI use cases: replace, restore, improve, enhance, and research. Here we propose a mental framework to understand the utility of BCI applications and to inspire fellow and future BCI pioneers to build upon it.
BCI applications started off “replacing body ability” (bottom left in the Utility Map) for paralyzed patients. With advancements in invasive technology, particularly implants and decoding capability, we will see clinical and medical-use BCIs achieve higher accuracy, reliability, and degrees of freedom of control for more diverse populations with disabilities. This includes finer movement control and speech decoding that would significantly increase communication bandwidth and speed.
BCI then expands toward “restoring brain ability” (upper left in the Utility Map) for patients with neurological disorders, coupling to brain activity to provide real-time, closed-loop stimulation for therapeutic interventions. BCIs coupled with electrical or magnetic stimulation techniques, either non-invasive (TMS, tACS, tDCS, taVNS) or invasive (DBS, VNS, or FES), have already been used or entered clinical trials for treating a variety of neurological or mental disorders. Auditory stimulation in sync with slow-wave brain activity during sleep may improve sleep and potentially prevent cognitive decline (Zeller et al 2023). Visual feedback (e.g. playing/stopping a video clip) can be provided to incentivize users to achieve an ideal brain state for improving cognitive functions in people with ADHD or schizophrenia (Singh et al 2020).
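To give a flavor of the closed-loop idea, here is a deliberately simplified, offline sketch that band-passes one EEG channel in the slow-oscillation range and flags candidate moments where an auditory stimulus could be cued. Real systems run in real time with careful phase prediction and safety checks; the bands and thresholds here are illustrative assumptions.

```python
# Simplified offline sketch of slow-wave-locked stimulation timing.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250
eeg = np.random.randn(fs * 60)                 # one fake channel, 60 seconds

b, a = butter(2, [0.5, 4.0], btype="bandpass", fs=fs)
slow = filtfilt(b, a, eeg)                     # slow-oscillation band

thresh = -1.5 * slow.std()                     # deep negative half-wave
# Flag samples where the signal crosses back up through the threshold,
# i.e. candidate down-to-up transitions where a tone might be cued.
crossings = np.where((slow[:-1] < thresh) & (slow[1:] >= thresh))[0]
stim_times = crossings / fs                    # seconds
print(f"{len(stim_times)} candidate stimulation times in 60 s of data")
```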
Now we are seeing the trend in development and applications move toward improving current abilities and unlocking new abilities (bottom and upper right in the Utility Map). Passive-BCI applications (Zander and Kothe, 2011) have been used in day-to-day settings for healthy populations in scenarios like learning in a classroom, working in the office or at home, driving a car, navigating an airplane, and playing and interacting in virtual-reality games. We will explore this topic further in Level 5.
To sum up, with increasingly comfortable and miniaturized headwear for brain sensing, the accumulation of data and labels, and breakthroughs in artificial intelligence and computing capability for decoding, we will see BCI technology flourish and provide a better quality of life for us.
Level 5: Expert
What is the trend in BCI technology in the next 5-10 years and beyond? What are some grand challenges in BCI that may require breakthroughs from other fields? In this section, we invite you, as an expert in your own field, to collaborate and help unlock the future of BCI.
Scientific Understanding of the Brain
“Complete understanding of the human brain” is the grand challenge. It is so grand that we will almost certainly not arrive at a satisfactory result in this lifetime; one ought to walk humbly when approaching this mountain, its peak disappearing into the clouds. Like an ant learning the latest mathematical model of how ant societies work, our understanding may ultimately be limited by our vocabulary no matter how fluent we become.
Still, the goal can be pursued incrementally, and eventually the mountain of knowledge might be scaled. We will know we've arrived when all neurological diseases, neurodegenerative disorders, mental illness, and anything else negative related to the brain no longer exist, since complete understanding of the brain would mean complete ability to modify it. Before getting lost on a philosophical or bioethics branch here, let's step back to the side of neuroscience and the grand challenge of the field: understanding the brain to the best of our abilities.
The phenomena observed and documented by neuroscientific instruments so far seem to only scratch the surface. In particular, three research areas might have a direct impact on the future of BCI:
(1) understanding the mechanisms that give rise to the complex and dynamic cognitive functions and mental states in humans. What are the neural mechanisms and cognitive processes that embody our subjective experience of feelings, thoughts, or other mental states? Without bridges between neuroscience, cognitive science, and psychology, we won't be able to quantitatively measure the various mental states. How many emotions do we have, after all? To measure them all, we need to know.
(2) advancement in tools to record and stimulate the brain with high spatial and temporal resolution - this will not only advance our scientific understanding of the brain but also provide BCI developers with the right tools to decode and modulate the desired brain functions. New tools lead to new rules, as the saying goes. And circularly, new rules to new tools.
(3) using BCI itself as a research tool, serving as a synthetic nervous system that helps advance our understanding of neuronal, functional, and psychological changes in response to real-time, dynamic, adaptive feedback. Such understanding can create wondrous value, for example, to treat neurological or psychiatric disorders, find personalized methods to optimize our learning, promote our brain and mental health, and effectively co-evolve with rapidly advancing artificial-intelligence personal agents.
BCI Software Platform: from Algorithms to Mechanisms
Naturalistic interaction with computer applications is one primary goal of BCIs. Many of the most popular BCI paradigms have focused on decoding algorithms and artificially connected the outputs to other forms of control as proof-of-concept demonstrations. These often are not natural for users, and the interactions they create with applications can be unintuitive: flickering lights or flashing bars on a screen lead to a word being selected, or imagining opening or closing a hand steers a wheelchair. Many of these paradigms overlook the human in the loop and the principles of usability that other personal computing areas take for granted.
The connection between decoding algorithms and forms of interactive feedback or control, which together we refer to as the “Mechanism”, requires thoughtful and user-centered design. For a BCI to be useful and used on a daily basis, it needs to be comfortable to the extent of becoming unnoticeable, not only physically but psychologically, in terms of the methods it employs to interact with the user. Here we'll offer a few unique BCI control and interaction mechanisms.
An Actions mechanism is produced by a user actively doing something and expecting a fast and accurate outcome, like silent speech for communication or imagined movement for object control. An Intent mechanism is a more subtle and natural form of control, where a user wants something to happen by naturally changing what they focus on, prefer, or dislike. A States mechanism does not require any mental effort and is a more subliminal, passive measure that reflects user emotions and general mental conditions. A Responses mechanism also does not require active control but measures a user's natural mental reaction to external stimuli or feedback, like seeing and correcting a mistake.
Traditional human-computer interfaces only provide Actions mechanisms, such as pressing an icon with one's finger to open an app. It requires active effort to move the finger and press, and one expects the app to open instantly every time it is pressed. Traditional BCI has followed this broader computing trend and pursued Actions mechanisms. However, BCI has the unique ability to unlock other types of control and interaction mechanisms, like Intent, States, and Responses, in addition. These mechanisms require less mental effort from users, and because the control is more implicit, users do not hold them to the same high performance expectations as Actions. A successful BCI will in this way expand the command repertoire to provide various mechanisms that are intuitive to users, similar to how touchscreens and computer mice were instantly familiar and easy for most people to start using when they came out.
The ultimate solution likely involves a suite of mechanisms that inter-relate and interact with each other while driving external actions. Much like the brain itself is arranged, the aspects of cognition that connect to applications through a BCI likely need to be hierarchically arranged and segmented to work best.
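As a toy sketch of what a software layer organized around these mechanisms could look like, the code below defines the four mechanism types and routes decoded events to handlers with different confidence expectations. The names and signatures are our own illustration, not a standard BCI API.

```python
# Toy sketch of a mechanism taxonomy and dispatcher; names are illustrative.
from enum import Enum, auto
from dataclasses import dataclass
from typing import Callable, Dict

class Mechanism(Enum):
    ACTIONS = auto()     # explicit command, fast and accurate outcome expected
    INTENT = auto()      # subtle shifts in focus or preference
    STATES = auto()      # passive emotional / cognitive state
    RESPONSES = auto()   # reactions to stimuli, e.g. error signals

@dataclass
class Decoded:
    mechanism: Mechanism
    value: str
    confidence: float

def handle(event: Decoded, handlers: Dict[Mechanism, Callable[[Decoded], None]]):
    # Implicit mechanisms tolerate lower confidence than explicit Actions.
    min_conf = 0.9 if event.mechanism is Mechanism.ACTIONS else 0.6
    if event.confidence >= min_conf:
        handlers[event.mechanism](event)

handlers = {
    Mechanism.ACTIONS:   lambda e: print("execute command:", e.value),
    Mechanism.INTENT:    lambda e: print("surface suggestion:", e.value),
    Mechanism.STATES:    lambda e: print("adapt experience for state:", e.value),
    Mechanism.RESPONSES: lambda e: print("auto-correct after:", e.value),
}
handle(Decoded(Mechanism.STATES, "low_focus", 0.72), handlers)
```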
What are the Killer Applications of BCI?
1. New Communications
Full conversational speech and full control over a computer are the goals for personal electronics generally when it comes to communications: hands-free, eyes-free, voice-free, gesture-free commands that work for patients and able-bodied people alike. Pure intention and intuitive interaction, easily working every time.
In the first 50 years of BCI research, most applications focused on active control within medical, clinical, and research ecosystems to give people who are paralyzed a new way to communicate or control a device that increases their independence. The noble goals and efforts in this arena have been mostly about helping patient populations, where even 80% control over a cursor is immediately life changing for someone who is locked in. Continuing along this line of development, it is a feasible goal of the field to free every single locked-in patient, and that, we believe, is achievable in the next decade with noninvasive methods. If a person has functional brain activity, they should be able to communicate basic commands at the least. This is a humanitarian objective of the field and needs to be achieved. Making locked-in a thing of the past remains a powerful driver for many in the field. For some it is the only litmus test worth evaluating progress by.
From another perspective, though, for most people a cursor that only works 80% of the time is simply not good enough, and the dominant BCI paradigms of the existing research feel strained and uncomfortable to use, since they were designed for the most desperate of cases. In the past decade, a new ecosystem has been emerging around more natural use of BCIs where there is no need for active control. That demand on the user is lifted and interaction is more seamless. Easier connection paradigms make BCI more accessible to average consumers. While implanted BCI systems might be needed to communicate at conversational speeds, success in noninvasive systems is accelerating, with adjacent advances in generative AI helping to power the progress.
2. Skills Learning
There are already killer applications within learning, mostly tied to how much faster the same skills can be taught with a BCI than without. For example, when you are learning in an online course, an add-on BCI can accelerate your learning by sensing that you are bored and giving you harder material, or sensing that you are overwhelmed and slowing down with more examples, tracking toward optimal performance for each individual according to Yerkes-Dodson-like performance curves (Yerkes, Dodson, 1908).
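A minimal sketch of that inverted-U intuition, with a hypothetical engagement decoder and made-up thresholds, might look like this: nudge difficulty up when the learner is under-engaged and down when they are overwhelmed.

```python
# Toy difficulty controller; decoder outputs and thresholds are hypothetical.
def adapt_difficulty(difficulty: float, engagement: float,
                     low: float = 0.4, high: float = 0.8,
                     step: float = 0.1) -> float:
    """difficulty and engagement are assumed normalized to [0, 1]."""
    if engagement < low:        # bored / under-aroused: make it harder
        difficulty += step
    elif engagement > high:     # overwhelmed: ease off, add examples
        difficulty -= step
    return min(max(difficulty, 0.0), 1.0)

difficulty = 0.5
for engagement in [0.35, 0.30, 0.55, 0.85, 0.90, 0.60]:  # fake decoder output
    difficulty = adapt_difficulty(difficulty, engagement)
    print(f"engagement={engagement:.2f} -> difficulty={difficulty:.2f}")
```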
A driving trainer with an add-on BCI can sense that you feel stressed and modify its driving behavior to help you feel comfortable; in a more extreme example, an airline pilot can train in difficult and dangerous conditions that simulate their state of mind more effectively. Or imagine having a personal athletic trainer who can give you prompt notice when you seem distracted, so you can regain control of your attention at a key moment of an exercise. The keyword here is “action”: BCIs can take actions and turn insights from the brain into timely feedback that effectively improves users' ability to learn.
3. Adaptive Experiences
The third killer application is a personalized, adaptive experience directed by BCI. In gaming this is most immediate: a virtual hand with fingers you can control (Furman et al 2016) and use as easily as your own, an entire virtual body, an emotionally connected avatar. Gaming content, audio content, video content: all might be adaptively tuned to individual users to improve their experience.
For example, audio content like podcasts can be played at variable speeds adapted to the user in the moment, and playlists can be customized to achieve an effect like increasing user focus (Haruvi et al 2022). Arctop technology has already shown some success in this area, being used to continuously measure a person's focus level while they listen to audio content. This enables adaptive playlists that reliably increase and sustain attention, which is helpful for studying, working, exercising, and many other essential human tasks that rely on focus.
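One possible shape for that playlist logic, sketched under our own assumptions rather than as Arctop's actual implementation, is to smooth a decoded focus score with an exponential moving average and switch tracks when the smoothed score stays low:

```python
# Hypothetical adaptive-playlist sketch driven by a decoded focus score.
def next_track(current: str, playlist: list, ema_focus: float,
               threshold: float = 0.5) -> str:
    if ema_focus >= threshold:
        return current                          # focus is fine, keep playing
    idx = playlist.index(current)
    return playlist[(idx + 1) % len(playlist)]  # try something different

playlist = ["ambient_01", "lofi_02", "classical_03"]
track, ema, alpha = playlist[0], 0.7, 0.2
for focus in [0.65, 0.55, 0.40, 0.35, 0.30]:    # fake per-window focus scores
    ema = alpha * focus + (1 - alpha) * ema     # exponential moving average
    track = next_track(track, playlist, ema)
    print(f"focus={focus:.2f} ema={ema:.2f} -> playing {track}")
```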
In today’s attention economy, information streams are increasingly rapid and bite-sized, with distractions abounding. Technologies that help maximize and maintain focus states can be invaluable. And using BCI technology, applications could adjust experiences toward theoretically any target brain state, not just focus, which is perhaps the most 'killer' part of this application. By anchoring application behavior to objective measures of user cognition, a system of instant feedback becomes available, with profound implications for what is achievable by a user in concert with their BCI.
Bioethics, Privacy & Security
Concerns about data ownership, data access rights, and data privacy abound in BCI. Data may be processed locally on the device (edge computing), with user-controlled data access rules and only sparse features (e.g. via an SDK) sent out, encrypted, to the Internet, cloud, or local network. Or it could be processed like the cyber wild west, with unrestricted access to real-time and historical data.
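A minimal sketch of the privacy-preserving pattern, using Python's `cryptography` library and assuming the raw signal never leaves the device, might compute only sparse derived features locally and encrypt them before transmission. Key handling is simplified here; a real system needs proper key management and user-controlled access policies.

```python
# Sketch: on-device feature extraction plus encryption before transmission.
import json
import numpy as np
from cryptography.fernet import Fernet

raw_eeg = np.random.randn(8, 250 * 10)           # raw data stays on-device

# Sparse, derived features only (here: a per-channel log-power proxy).
features = {"log_power": np.log(np.mean(raw_eeg ** 2, axis=1)).round(3).tolist()}

key = Fernet.generate_key()                      # in practice: device keystore
cipher = Fernet(key)
payload = cipher.encrypt(json.dumps(features).encode("utf-8"))
# Only `payload` (encrypted features), never `raw_eeg`, would be transmitted.
```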
Markets, leading companies and government regulation will likely be the greatest determiners of how these issues evolve. Across the world it will be interesting to see if there is consensus, or if divergent attitudes shape BCI technology in different directions. For now all we can do is build to the best principles possible, and design for privacy and the world we want to see.
BCI data is sensitive by nature, since it contains information about a person's identity (Kopito et al 2021), health, and real-time mental status, so it needs to be treated accordingly. Through the cognitive and affective state data BCIs process, much of what people consider the most fundamental parts of themselves is exposed. Because of that, to say the least, the field is ripe with ethical, security, and privacy issues that invite contributions from experts.
Open questions in the field include how brain data is measured. For instance, does the device clearly disclose that brain sensors are embedded? At a minimum, it seems, people should know when their brain data is being processed. If they consent to it, the next layer is how it is used by applications. At Arctop we take a user privacy-centered approach, but this is not yet the prevailing model, as increasingly cloud-based approaches are used for business models based on data being shared with or accessed by affiliates. Some companies ask users to 'donate' their data; some are less straightforward. The dramatic differences such technology and data architecture decisions make to the end user's mental privacy, and personal rights writ large, cannot be overemphasized.
BCI as a class of technology thus raises unique, complex issues. These involve questions of human agency as well, since actions that BCIs perform must be treated as an extension of the person's own, unless, of course, the BCI made a mistake in decoding what the user wanted. For example, if a prosthetic arm punches someone, who is to blame if the user denies that they made the BCI punch intentionally? At what point a crime becomes a crime is another corner one may end up in when working on the ethics of this space: does law enforcement need brain data to prevent drunk driving or a crime of passion? The reader can go ahead and fill in the blanks here in terms of other sci-fi tropes and subplots.
There is also a significant moral and societal question of BCI "haves" and "have-nots": who is entitled to access the technology and for what purpose? Access to BCI technology could profoundly differentiate populations from one another and create positive and negative feedback loops across cultures if one group has access to technology that accelerates learning and improves health while another does not (Bavelier et al 2019).
Toward a General BCI: Interface, Interaction, Intelligence
An interesting model is proposed by Gao et al 2021: that generalized BCI technology will evolve through three stages: interface, interaction, and intelligence. The field of BCI has transformed from “interfacing”, one-way brain-to-computer control, to “interaction”, two-way co-adaptation of both the human brain and the computer. This transition highlights the importance of the “Write” or “Encoding” path, compared to the conventional “Read” or “Decoding” direction. The “Write” path could use interactive auditory or visual feedback for adaptive learning experiences in cognitive augmentation or skill learning, or it could use direct electrical stimulation for neuromodulation therapies or neurorehabilitation, as described in the Level 4 Applications section. Since learning is a master tool, we believe the killer applications of read-and-write technology will be in teaching ourselves new things not just faster, but more memorably and effectively.
As we enter the era of super-human AI “agents” and they become the new “computer”, the final stage of generalized BCI technology will be “collaborative intelligence”: a seamless integration and collaboration of human intelligence (HI) and artificial intelligence (AI), where explicit learning is not needed in many instances because the partnership is so tightly coupled. HI is better at understanding, reasoning, generalizing, empathizing, and goal-setting, while AI is better at perceiving, memorizing, computing, interpreting, and achieving specific tasks. AI can couple a user's goal-directed intents (e.g. attention, preference) as reward functions in its reinforcement learning to align goals. AI can integrate a user's states (e.g. emotions, stress) to jointly make personalized, situational decisions in perfect harmony with those aligned goals.
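As a toy sketch of coupling a decoded user signal into a reward function, the code below runs an epsilon-greedy bandit whose reward is a stubbed "decoded preference" score. The decoder is a random stand-in and the whole setup is illustrative speculation, not a real agent design.

```python
# Toy bandit whose reward is a stubbed BCI-decoded preference signal.
import random

def decoded_preference(option: int) -> float:
    # Stand-in for a decoded preference/engagement score in [0, 1].
    true_pref = [0.3, 0.8, 0.5]
    return min(max(random.gauss(true_pref[option], 0.1), 0.0), 1.0)

n_options, eps = 3, 0.1
counts, values = [0] * n_options, [0.0] * n_options
for _ in range(500):
    a = random.randrange(n_options) if random.random() < eps \
        else max(range(n_options), key=lambda i: values[i])
    r = decoded_preference(a)                 # the user's brain "scores" the choice
    counts[a] += 1
    values[a] += (r - values[a]) / counts[a]  # incremental mean update
print("learned preference estimates:", [round(v, 2) for v in values])
```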
The Why: Building A General BCI
A General BCI is one that works for any human immediately out of the box with perfect decoding of all the elements and dimensions of a person's cognition. It is a system that has mapped the human experiential space and has a unified informational model of how humans feel and understand things: a symbiotic relationship in which the BCI does everything it can to maximize the user's quality of life. At scale, General BCI systems in use by a population of people through the same approach should be able to maximize the quality of life of the community as a whole.
By making quality of life a quantified, data-driven metric to be optimized by a General BCI, the way can be charted toward improvement for anyone: outgrowing certain human limitations and suffering with improved emotional regulation, reversing cognitive decline, maintaining memory and healthy forgetting, downloading skills into your brain to speak a new language. All are in the province of a General BCI, which by necessity requires leaps forward in imagination and understanding of the brain, human experience, and the environment in which both operate.
In the recent book "The End of Reality," writer Jonathan Taplin raises a provocative point about the philosophy of the Greek thinker Epicurus, which he describes as the opposite of what 'Big Tech' aims for today, in that for Epicurus the three elements of a happy life are unaddressable with modern technologies:
- The company of good friends
- The freedom and autonomy to enjoy meaningful work
- An 'examined life'
We would argue to the contrary that these Epicurean values are actually best addressed with a BCI – not other tech or by 'Big Tech' per se, we agree, but specifically BCI because of how it works – yet another example of how this technology is in a unique class. Perhaps instead of new communications, skills learning and adaptive experiences, those three elements above should be the anchors for BCI.
The company of good friends can be supported well by an empathic, BCI-powered application for matchmaking, for scheduling, and for reviewing the goodness of a friendship fit with objective data, plus the patience and organizational skill to maintain and cultivate friendships over time. Freedom and autonomy to work is about communication and control, about not being stopped from pursuing purposeful work regardless of where one is or who they are. And an 'examined life' is, we feel, another term for the quantified self, the tech trend for which BCI is a kind of apotheosis, since it allows the highest-granularity examination of one's body and brain. So it may be that with BCI applied right, even the greatest Luddites among us may be won over.
To create the ultimate General BCI that helps humans everywhere live a higher quality of life, at Arctop we are focusing on the software platform for decoding and building it in a hardware-agnostic way to be as universal in connecting applications as possible. Quality of life for humans everywhere is our north star, and to enable as many people as possible to enjoy a high quality of life, we believe the brain, and BCI technology, are the master tools. We can't do it alone, since so much is required across the technological and societal stack to bring BCI out widely. Together we are at a unique time in human history, with BCI, AI, and VR technologies knitted together by ever more powerful computing that is near-miraculously bringing us all into the General BCI era. With Arctop software and developer tools we aim to accelerate that arrival.
Sometimes it's worth taking a moment to marvel at how far humans have come. Here at the end of 2023, the end of this post, we are grateful you're here reading and we are reflective on the distance run. But mostly we are looking ahead, excited to release a new product early in the new year that takes a leap forward toward that goal of quality of life for all. If you want a sneak peek at our baby General BCI, reach out to us and we'll do our best to connect!
References
- Gao, X., Wang, Y., Chen, X. and Gao, S., 2021. Interface, interaction, and intelligence in generalized brain–computer interfaces. Trends in cognitive sciences, 25(8), pp.671-684.
- Brunner, C., Birbaumer, N., Blankertz, B., Guger, C., Kübler, A., Mattia, D., Millán, J.D.R., Miralles, F., Nijholt, A., Opisso, E. and Ramsey, N., 2015. BNCI Horizon 2020: towards a roadmap for the BCI community. Brain-computer interfaces, 2(1), pp.1-10.
- Chang, C.Y., Hsu, S.H., Pion-Tonachini, L. and Jung, T.P., 2019. Evaluation of artifact subspace reconstruction for automatic artifact components removal in multi-channel EEG recordings. IEEE Transactions on Biomedical Engineering, 67(4), pp.1114-1121.
- Hsu, S.H., Pion-Tonachini, L., Palmer, J., Miyakoshi, M., Makeig, S. and Jung, T.P., 2018. Modeling brain dynamic state changes with adaptive mixture independent component analysis. NeuroImage, 183, pp.47-61.
- Hsu, S.H., Lin, Y., Onton, J., Jung, T.P. and Makeig, S., 2022. Unsupervised learning of brain state dynamics during emotion imagination using high-density EEG. NeuroImage, 249, p.118873.
- Chiang, K.J., Wei, C.S., Nakanishi, M. and Jung, T.P., 2021. Boosting template-based SSVEP decoding by cross-domain transfer learning. Journal of Neural Engineering, 18(1), p.016002.
- Singh, F., Shu, I.W., Hsu, S.H., Link, P., Pineda, J.A. and Granholm, E., 2020. Modulation of frontal gamma oscillations improves working memory in schizophrenia. NeuroImage: Clinical, 27, p.102339.
- Zeller, C.J., Züst, M.A., Wunderlin, M., Nissen, C. and Klöppel, S., 2023. The promise of portable remote auditory stimulation tools to enhance slow‐wave sleep and prevent cognitive decline. Journal of sleep research, p.e13818.
- Zander, T.O. and Kothe, C., 2011. Towards passive brain–computer interfaces: applying brain–computer interface technology to human–machine systems in general. Journal of neural engineering, 8(2), p.025005.
- Kopito, R., Haruvi, A., Brande-Eilat, N., Kalev, S., Kay, E. and Furman, D., 2021. Brain-based authentication: towards a scalable, commercial grade solution using noninvasive brain signals. bioRxiv, 2021.04.09.439244.
- Haruvi, A., Kopito, R., Brande-Eilat, N., Kalev, S., Kay, E. and Furman, D. 2022. Measuring and modeling the effect of audio on human focus in everyday environments using brain-computer interface technology. Frontiers in Computational Neuroscience 15, 760561
- Furman, D., Reichart, R. and Pratt, H., 2016. Finger flexion imagery: EEG classification through physiologically-inspired feature extraction and hierarchical voting. 4th International Winter Conference on Brain-Computer Interface (BCI), pp.1-4.
- Furman, D., Benisty, H., Abramovich, T., Ivry, A., Pratt, H. 2016. Enhancement of BCI classifiers through domain adaptation. IEEE International Conference on the Science of Electrical Engineering.
- Furman, D. Computers Will Soon Read Your Mind: Technology will help patients suffering from ALS or strokes. 2023. The Wall Street Journal.
- Furman, D., Kwalwasser, E., 2023. Interactive electronic content delivery in coordination with rapid decoding of brain activity.
- Furman, D., Kwalwasser, E., 2021. Empathic Computing System and Methods for Improved Human Interactions With Digital Content Experiences.
- Lotte, F., Larrue, F. and Mühl, C., 2013. Flaws in current human training protocols for spontaneous brain-computer interfaces: lessons learned from instructional design. Frontiers in Human Neuroscience, 7(568).
- Bavelier, D., Savulescu, J., Fried, L., Friedmann, T., Lathan, C., Schürle, S. and Beard, J., 2019. Rethinking human enhancement as collective welfarism. Nature Human Behaviour, 3(3), pp.204-206. doi:10.1038/s41562-019-0545-2.
- Yerkes RM, Dodson JD (1908). "The relation of strength of stimulus to rapidity of habit-formation". Journal of Comparative Neurology and Psychology. 18 (5): 459–482. doi:10.1002/cne.920180503.
- Taplin, J. 2023. The End of Reality: How Four Billionaires are Selling a Fantasy Future of the Metaverse, Mars, and Crypto. Publisher: PublicAffairs. ISBN: 9781541703155