Scenario FIVE

Sensorium (Internet of Emotion)

This is a world in which high-fidelity, ubiquitous sensors and advanced data analytics make it possible to gain deep insight into human emotional experiences, a kind of insight that will remain extremely difficult for humans to assess at scale until roughly 2020.

More familiar types of data that in 2016 are expected to make a big difference—granular traffic data or data gathered from smart homes—turn out to be mildly interesting but not transformative. The greatest gains, commercial and otherwise, will instead be made through technology that measures how people feel: how mind states and memories are called on and experienced, and where love, hate, jealousy, ambition, mastery, competitiveness, and other basic human emotional states are invoked. Biosensing, found at the intersection between physical indicators and brainwave measures, will become the biggest growth area on the internet. In this world, cybersecurity and emotional security will become inextricably intertwined. Cybercriminals, corporations, and governments will not only take advantage of tracking human emotion but also begin to subtly manipulate those emotions for licit and illicit gain.

The World

This scenario portrays a world of 2020 in which emotional sensing becomes a central—and possibly the central—feature of internet technologies. The precursors to this world are already in place in 2016. Consider the “Quantified Self” movement, a hobbyist trend toward using technology to measure unexpected aspects of daily life.1 In this world, the movement will lose its name by 2018 because its practices will become mainstream. Just as smartphones became standard possessions over the course of a few years, biosensing devices will become ubiquitous as the price of sensors that are deployed on and around human bodies falls further.

“Personal metrics”2 already allow for tracking empirical behavioral patterns. In this world, these metrics will be monetized in commercial products, will help people achieve personal goals (like fitness), and will enable productivity “hacks”3 for daily life. As these devices become more accurate and their effects more widespread, it will become common in major cities to see people wearing three, four, or perhaps ten personal metric devices. Implantable devices will be the new horizon for hobbyists, and these too will become mainstream in a short timeframe (though perhaps not by 2020).

Much of this technology—in its first iterations—will make relatively little difference. Step counts and real-time heart-rate data turn out to be mostly curiosities, instructive for improving health (at least in theory), but with limited value to others. Reminders and records of time spent sitting, standing, or talking prove to be clever conversation starters but not much more. For all the money, effort, and attention that will be spent trying to build truly useful products and services on top of these devices and their data streams, success will continue to be elusive. Most wearable devices will end up in someone’s drawer after a couple of weeks—for now.

The real turning point will occur when the market for sensors shifts to include not just personal wearables and data trackers, but an extensive array of remote sensors that capture data about interactions between significant numbers of people. So-called sentiment analysis already allows firms to detect shifts in public opinion based on reactions to events online.4 When that data can be combined with, for example, heart-rate variability data and extensive external information about what is happening to the people whose heart rate is being tracked at that moment, the value of measuring interactions will explode. When body temperature, brainwave activity,5 eye-tracking and pupil dilation, perspiration, endocrine and glucose levels, endorphin highs, and other variables can be measured through portable devices and among groups of people who are interacting in a particular environment, it will become plausible to understand interpersonal dynamics better than ever before. As sensors get better and smaller, these recording devices likely will not be visible to the naked eye; imagine brain sensors on the earpiece of the latest Kate Spade glasses, or contact lenses that can measure not only glucose but also other biomarkers in eye fluid.
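
To make this kind of sensor fusion concrete, here is a minimal Python sketch of how a text-sentiment score might be combined with RMSSD, a standard time-domain measure of heart-rate variability. All readings are hypothetical, and the arousal_index weighting is an arbitrary assumption rather than any established metric:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences (RMSSD), a standard
    time-domain measure of heart-rate variability (HRV)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def arousal_index(rr_intervals_ms, sentiment_score):
    """Toy fusion of a physiological signal with a sentiment score in
    [-1, 1]. Lower HRV loosely tracks stress, so HRV sits in the
    denominator; the scaling constant is purely illustrative."""
    return abs(sentiment_score) / (1.0 + rmssd(rr_intervals_ms) / 100.0)

# Hypothetical inputs: RR intervals in milliseconds from a wearable,
# plus a sentiment score from an off-the-shelf text analyzer.
rr = [812, 790, 805, 778, 760, 795]
print(round(arousal_index(rr, sentiment_score=-0.8), 3))
```

A real system would synchronize such streams across many people and contexts; the point of the sketch is only that the joint signal carries more information than either stream alone.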

This will mark the rapid launch of a new research field, combining aspects of clinical psychology and computer science and focused on individual “affects,” or surface impressions of an individual’s mental state. Think of today’s efforts to use facial cues to measure emotion, but scaled up, occurring in real time, and made extremely precise. The promise of this field will generate a second round of interest and investment in personal metrics. Doctors will use these capabilities for long-term health monitoring of patients at a much broader scale; companies will use them to study productivity and performance patterns of employees and teams; marketers will use them to reach a new level of customized advertising and product placement; school systems will use them to help identify deeper sources of learning patterns and behaviors in students; and communities will use them to understand what is actually happening and what citizens really care about.

What will enable these kinds of developments? Progress on these dimensions will be a function of knowing not just what people do and say but also how they feel at each moment. Data about emotional states will be the key that unlocks the latent value of personal and professional data already being collected in 2016. In other words, analysis at the intersection of internal (personal) and external (environmental) outcomes will reveal extraordinary details about how people respond to one another and to stimuli in their environment. Researchers will be able to measure and record the landscape of human emotion—its conditions, triggers, and effects. Interest in aggregated insights—the “emotional internet”—will begin to supplant interest in individual affect as analysis becomes more sophisticated. Surface impressions of people’s emotions will no longer be interesting, because the underlying emotions themselves can be measured directly, at scale, and with high accuracy.

Consumers initially will be wary of the incredible intimacy this new stream of activity seems to convey. Their ambivalence will be tested repeatedly and sometimes unintentionally. For instance, Fitbit and Jawbone might together release a “mood armband” that, despite enormous media and scientific attention, surprises with its slow uptake in the market. Consumers will wonder whether this device is, on the one hand, actually able to do what it claims, or perhaps, on the other hand, able to do more than it claims: could it allow the firms behind the device to learn more about our emotions than we ourselves know? That ambivalence will be mixed with skepticism about whether emotional tracking is anything more than a gimmick—or even a farce.

This mix of fear and skepticism will linger until companies decisively prove the value of this new technology. Facebook might release, with great fanfare, a “Mood History” product that periodically reminds users of their mood on any particular day up to two years ago. The program would be accompanied by a premium offering that claims to be able to predict mood on days going forward, and suggests behaviors that individuals can employ to make themselves feel more settled, calm, and even happier over the course of a day. The idea would seem so audacious that no one would take it seriously—until they try it and find out that it works.6

Such proof will signal a tipping point in the marketplace. A new horizon of devices and applications will be developed, focusing on what can be done with reliable measures of emotional insight at scale. Some of the use-cases will be almost mundane: it will be much easier to know if your date is having a good time, or if party guests are enjoying themselves as much as they want you to think they are. Some will be fascinating: how do your employees really feel about working for you? How deep is the loyalty of Chicago Cubs fans to a team that hasn’t won a World Series for decades? Some will be deeply personal: does my spouse really like the gift I gave her? And some will prove incredibly useful in day-to-day interactions where emotional states are hard, but extremely valuable, to communicate. Imagine replacing the 10-point scale for pain with the ability to convey to a physician precisely how much an injury or disease hurts, frightens, or troubles you.

From the seemingly trivial to the most serious, information about fundamental aspects of emotional experience will become newly accessible. For many, it may feel less like a revolutionary development than the next incremental technological advancement. The irony is that the technology will be able to gauge exactly that dimension of response to itself. Might this be the new frontier in machine learning—a system that can self-adjust to stay on the “comfortable” side of the human response equation to maximize adoption of itself?

Outcomes

The ability to use physiological and sensor data to accurately gauge human emotion will still seem a novel and preliminary capability in 2020, and the extent to which this data can be used to make deeper, long-term causal inferences about behavior will be a source of debate among experts. But for many practical applications, the technology will outpace expectations and yield a stream of surprises. The first stage of adoption will see a wide variety of new uses for broad but fairly shallow emotional sensing across myriad sectors. Governments will respond by seeking to regulate the extreme cases without slowing innovation (a familiar trope). Cybersecurity tensions will run high in this world, as illicit actors and their opponents experiment boldly with what they can do to predictably and controllably influence human emotion.

Uses of Emotional Sensing

The promise of new emotional sensing technologies will inspire a wide variety of initiatives to improve both lives and profits. The icon of this world might be the app for “emotionally verified emojis,” released by Apple in 2020 as the primary feature of its newest mobile operating system. But this world would be about much more than just emojis that tell the truth.

In the healthcare industry, psychologists could seek to access a historical record of emotional incidents to create a “digital emotional memory.” Such a record could allow health professionals to more accurately explain the circumstances that lead individuals toward mental states like depression, and, by tailoring care to those needs, could vastly improve the mental health of the population. Imagine the improved life experience (and economic productivity gains) of an American population with rates of depression reduced by even 10 percent.7 On the less positive side, for some people the constant recording and reporting of emotion will create a self-fulfilling prophecy, as negative feelings and mind-states are reinforced—though it likely will not be possible for some time to separate individuals who benefit from those who are harmed. It may also become possible to foresee forms of addiction within emotionally quantified lives, including new levels of dependency on the aid and stimulation of neurochemical reward pathways. People may grow increasingly dependent on endorphin highs, whether they come from over-exercising, bullying, or shooting weapons.

Realms in which individual performance is held to extraordinary standards will have early and high-intensity exposure to new emotion-sensitive technologies. Professional athletes will seek out new monitoring programs to achieve peak confidence and emotional energy at game time. The military will press the boundaries of similar programs for use in combat. CEOs and political figures will give up their life coaches in favor of emotion-sensing advisement. There will be significant incentives to impose new regulatory regimes in these areas, as those with access to the best technology (or the guts to try it out) may develop meaningful advantages in many domains. Would NFL owners in 2020 argue about whether to ban some of these technologies as “performance enhancing” in the same way the NFL banned steroids and human growth hormone years earlier?

At less intense levels of deployment and usage, the emotional internet will bring on new needs for individuals to manage their emotional public image, which will become part of basic social maintenance, given employer and social interest. Individuals will “groom” themselves to produce positive physiological signals that display how calm, happy, and adventurous they are throughout the day. A new profession—the mood coach—might arise to offer services aimed at helping individuals keep their measurements within a desirable range.

The destabilizing effect these emotional tools may have on interpersonal relationships will be a source of much fascination, though it will be difficult for researchers to determine how much of the effect comes from emotional manipulation alone, compared to the broader shift toward digital communication as a whole (e.g., using social media for relationships or texting as a primary form of communication). Thus, despite all of the suspicions and media attention on what could go wrong, those who want to “turn back the clock” on the emotional internet will struggle to create a unified narrative (at least in the West). Overall, people will feel that the social benefit and utility provided by this data outweighs the potential risks, and the apparent economic advantages will continue to drive ambitious research and development.

By 2020, a person’s “memories” of events or periods in their life will be to a surprising extent verifiable by their own data record—not just the facts, but also the tonal quality of the emotional experience that took place. Many aspects of these records will be available not only to the users themselves but also to other sensor systems operated by companies, governments, and other individuals with whom a person had close contact. The records will be attractive targets to attack—to steal, manipulate, or hold hostage.

Private-sector organizations will push the field forward at scale. Measurable improvements in decision-making and team performance will be sought and sometimes achieved, though the precise causal links between emotional states and “quality” decisions will remain tricky to establish. Markets will begin to place a premium on firms that make these technologies or use them in leading-edge ways. Can corporations limit interactions between employees based on analysis suggesting their personality types are incompatible? Can someone be fired as an at-will employee based on emotional analyses? The public sector would not be far behind. Imagine in 2020 a leading US politician announcing that when she has to make hard decisions, she calls her behavioral psychologist for advice, rather than her best friend or priest.

The boundaries between licit and illicit transactions will become blurry in this world. Does a firm that wants to hire analysts of a certain Myers-Briggs Type Indicator8 for a particular team cross the line when it buys access to a proprietary algorithm that pulls out candidates with that indicator from a consolidated database, rather than taking the (much less accurate and scalable) approach of simply administering the Myers-Briggs test to potential hires? Competition among dating services will push toward what some will see as unsavory and illicit practices—for example, when preferences around emotional control and manipulation in relationships become reliable and priceable product features. It might be nice to know “for certain” that your dream date will be interested in emotional attachment that evening. But what if that product turns out to give the wrong signal even one time out of 50?

In a world where emotional sensing is commonplace, so too are the opportunities for intentional manipulation, both of the sensing systems and of raw emotions themselves. Vulnerabilities will come in many shapes and forms: emotional manipulations that human beings have always tried to impose on one another will become more systematic, targeted, and effective, and so will emotional countermeasures. As this arms race ratchets upward, we may start to see evidence of an “overclocking” of affective systems that occurs as emotions become separated from and imbalanced within the larger human cognitive and physical systems. Put differently, these kinds of emotional capabilities could easily outpace the evolutionary ability of humans to manage them in concert with other mental and physical systems. If all decision-making is a combination of cognitive and emotional processing,9 what happens when one of those two components suddenly starts moving much faster than the other?

Such a rapid (in evolutionary terms) reconfiguration of what a critical part of the human mind can do will present a vast attack surface for deception and manipulation, creating an entirely new “field” of emotional crime. It is one thing to commit identity fraud and steal money or property from a person; it is another thing to subtly manipulate an emotional state so that the victim “voluntarily” hands over money or property to a criminal because she feels she really wants to “contribute” to a “cause”—or to confuse or disorient the victim in deep emotional ways, leading to the same result. The ability to carry out these kinds of manipulations against multiple individuals simultaneously with targeted interventions cannot be explained away as a better form of advertising or propaganda; this will be something of a different kind.

The integrity of emotional data will also be in play in a different way—through rewriting history. In 2019, reports might emerge of high-profile individuals faking their own data profiles and retrospectively altering their emotional histories through database hacks at large sensor companies. Could a future presidential candidate be accused by his competitors of falsifying his emotional history to cover up prejudice and malice toward particular groups of individuals? Coupled with some random (or perhaps systematic—who can know for certain?) sensor error, it will become increasingly difficult for individuals to prove that their emotional records are truthful—not only to others, but also to themselves in some instances. Garmin might be taken down in a weeklong distributed denial-of-service (DDoS) attack by the “Anti-Hysterics,” a group known for their public protest of large-scale emotional analysis. The public response might be muted as people try to figure out who the good guys really are. Will a new market for sensor-blocking technologies emerge to enable individuals to “opt out” of sensor arrays designed to compute their emotional states?
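
One defensive pattern that might emerge against this kind of retroactive tampering is tamper-evident logging. The Python sketch below is an illustration under simple assumptions (a local, append-only log; no product named in this scenario is claimed to work this way): each record commits to the hash of the previous one, so editing history invalidates every later entry.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a record whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    entry = {"record": record, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def verify(chain):
    """Recompute every hash; any retroactive edit breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

chain = []
append_record(chain, {"t": 1514764800, "mood": "calm", "score": 0.7})
append_record(chain, {"t": 1514765400, "mood": "anxious", "score": -0.4})
print(verify(chain))                      # True
chain[0]["record"]["mood"] = "euphoric"   # rewrite history
print(verify(chain))                      # False
```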

Legal and Policy Regimes

It is difficult to imagine that contemporary beliefs and practices around privacy would survive the transition toward the emotional internet. More likely, privacy arguments from earlier in the decade will come to be seen as quaint, because what will be at stake in 2020 are some of the most fundamental questions about what is public and what is private, what is intimate and what is not. The boundaries between legitimate and illegitimate action will now have to be negotiated at an entirely new level. Some observers will argue that emotion is already exposed in the public realm during normal human interactions and thus cannot be privileged under any kind of “reasonable expectation of privacy” standard. Remote sensing or sensing at some distance (Is an airplane passenger overly nervous? Is a now-peaceful protestor’s anger approaching some threshold?) might not have any effect on that argument as long as the sensing takes place in settings that are generally thought of as public. The private sector will try to keep the game wide open by deploying familiar “innovation permission” arguments (that is, arguments favoring greater flexibility for those who are innovating), pointing to the range of goods and services that are improving people’s lives and encouraging regulators to stand back. If they succeed, privacy advocates might end up fighting on the margins, emphasizing the need for protections against emotional tracking in private spaces, as well as protections for particularly vulnerable populations—like children, the mentally ill, and older people suffering from dementia—whose emotional data records might need to be “clean-slated” at some appropriate moment.

At the same time, businesses (legal and otherwise) that capitalize on some of the more base or unseemly aspects of human behaviors—from pornography to fear and terror inducement—will be at the forefront of experimentation and, as is usually the case, will find ways to route around whatever boundaries are established by law and regulation. If establishing a contract requires a “meeting of the minds” between freely deciding individual parties to a negotiation, where does emotional data about the history of the parties, or their interactions in a particular case, move from efficiency-enhancing to something more insidious? Would a murder trial in 2020 allow biosensing evidence as part of a heat-of-passion defense?

The lack of any overarching theory about emotional data makes it more likely that regulation in the United States will evolve in the same stove-piped and segmented way that “normal” information privacy laws have developed. Health uses of biosensing data will be protected to some extent under amendments to the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and health privacy policies; these might also extend to protect individuals from particular kinds of discrimination based on emotional health. Even so, there will be huge fights over the boundaries of both “emotion” and “health.” Student privacy laws might restrict the use of emotional analytics to very particular educational purposes, and also limit access and retention of these records (unless it turns out that data about emotions makes a huge difference in performance).

Employees are less likely to be protected from emotional performance evaluations or job hiring screenings, in which case the use of biosensing devices in the workplace could become the norm. Labor unions might find new life as a bulwark against some of the more egregious uses of sensing data in both blue- and white-collar workplaces. The proliferation of emotion sensors in public spaces also would generate a significant increase in liability and harassment suits, since combined physiological and emotional data could be used to back up legal claims. This will create a seemingly inconsistent and confusing landscape of regulation that is much harder to navigate than anticipated.10

At a more local level of governance, intrusive regulatory interventions will likely emerge first to deal with situations where emotional states are associated with high-stakes and irreversible decisions that can be marked off as discrete events. Imagine a scandal where an unusual slate of individuals is elected to the San Francisco Board of Supervisors after campaign materials are used to manipulate local citizens, who report feeling euphoric and optimistic regarding candidates who are heavily funded by local wearable emotion data startup companies.11 California might then mandate a cooling-off period (time and space) around election centers to allow citizens to stabilize their mood without stimuli before voting. Other states might regulate the use of emotion-manipulating campaign tactics in the media, or adaptive campaign placards that feed off data from potential voters entering their vicinity. It would not just be about voting: some states might require auto manufacturers to incorporate emotion data into speed limiters on car engines, or even ignition switch-disabling technologies that set an “anger threshold” above which you cannot start your car.

Beyond US borders, the landscape of experimentation and regulation would become far more complex. Transatlantic arguments about issues that are prominent in 2016, like “Safe Harbor” data protections and the competitive dynamics of large, US-based intermediation platforms such as Uber and Airbnb, would seem pedestrian compared to the differences that would likely emerge around emotional analytics. Might Germany simply try to ban the use of remote emotional sensing and create protected categories (e.g., students or employees) where even local or personal sensing data could not be utilized? Would the European Union demand an even more stringent set of protections?

Conversely, will some of the faster-growing emerging economies in Africa and Asia move to accelerate the deployment of an “emotionally intelligent infrastructure” as they seek to leapfrog the competition with productivity and new products and services? Autocratic regimes will certainly want access to their population’s emotion datasets for many reasons, including control and manipulation. In The Prince, Machiavelli famously advised that it is better for a ruler to be feared than to be loved, so long as fear does not corrode into hatred.12 Imagine a world where ambitious autocratic rulers could calibrate these variables to precise measures of how populations respond to what they do.

Cyber-Emotional Security

In this world, the possibilities for communicating more effectively, working together, managing conflict, and assessing customer experience are hugely compelling. But so are the possibilities for manipulating emotional states, stealing and reconfiguring memories, using emotional datasets for mass mobilization toward the manipulator’s ends, and other assaults on this new and massive attack surface. This is a world in which cybersecurity becomes cyber-emotional security.

There are three key aspects to cyber-emotional security: device insecurity, emotional manipulation, and the vulnerability of data. On the first point, one core attack vector will be to target the devices themselves, many of which will be made by companies with limited security experience. Implantables will be particularly vulnerable, given the difficulty of removing them to fix hardware. Other sensitive targets will involve devices related to advances in medicine expected to take hold toward the end of the decade. Will hackers be able to attack digital storage devices containing individual DNA datasets, or 3D printers (and their build files) that construct the substrates for new organs?

Second, many traditional cyberattack vectors will expand in this world to involve much more effective and precise manipulation of emotions. Phishing? Attack someone with a word or phrase that is not just familiar, but particularly emotionally compelling. Social engineering thus becomes emotional engineering. Cybercriminals will also see significant benefits from attacking the new emotional sensing systems directly. Want to decrease productivity at a particular company? Manipulate team selection engines so people with incompatible traits have to work together, or worse, manage one another.

Finally, as this world develops, the value of the data being stolen will increase. While easily accessible, data from personal network devices and “quantified selves” will not be very interesting to criminals. How your mood changes at different times of the day may be harder to steal and interpret, but, done well, that theft will be much more interesting and lucrative. It is possible to foresee a segmented market for illicit data at different points in the value chain: raw quantified-self data, like raw coca leaf, will be cheap, while emotional information that is ready to use will be expensive, like cocaine.

Since firms will probably be the first to exploit these new data assets legally at scale, workers inside retail, advertising, entertainment, and pharmaceutical companies who try to use this emotional data for nefarious purposes will be a huge threat. Companies collecting the most robust datasets will also be vulnerable to attack.

In contrast, many governments will likely fall behind in the exploitation race; for democracies and others that care about public reactions, the “creepiness” factor of this data will be very high. Authoritarian governments will want to monitor the mass emotional states of their citizens far more aggressively and to test responses to stimuli—and their adversaries will want to steal that data. Surely intelligence agencies in Western countries would deeply value access to the Chinese government’s longitudinal data on Chinese citizens’ happiness and frustration.

Terrorists will be very interested in emotional data, both as an attack vector and as a way to identify the intensity of beliefs among their adherents (as well as to identify possible moles among potential recruits). It may be that the barriers to reliable interpretation are high enough that only the most sophisticated groups would go down this road, but some will surely try.

Familiar tradeoffs around security will appear again in this new domain, potentially with much higher stakes. Facebook (or its successor) will jump on the fact that individuals will want to “send” feelings and experiences to their friends and colleagues, as well as receive the same in return. The system that measures, captures, transmits, and interprets these emotions will want to ensure the availability and integrity of that data at all levels, from the individual upward. But individuals may also look for new means of emotional confidentiality, or sentiment protection, for mind states they do not want exposed in public. This tension would likely present first as a desire for some preservation of emotional privacy, but it will be extremely hard to define these parameters in advance.

Possibly the greatest risks will start to manifest in services that offer manipulation of emotional states and memories, even if by intention for the good of the user. The question of how users of these services can know that they will receive (or have received) the “manipulation” they want and not some (possibly subtle) variation that serves someone else’s ends may be the most critical new cybersecurity question. The broader uncertainty may start to be seen as a question of whether emotions remain useful and reliable tools for understanding the people and the world around us. This will be especially true as nation-states start to see the potential to use emotional states as large-scale, targetable, and reliable weapons.

It is likely then that traditional and newly formed response groups will focus on developing distinct strategies for preserving security in relation to malevolent actors, firms, government agencies, and society at large. Individuals will want not only to protect certain data from being recorded but also to confirm the truth of the data that they do release. (“Yes, honey, I really do like that dress.”) New corporations promising third-party validation of emotional data will seek to provide such confirmation.
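
At the data layer, such third-party validation could resemble the following Python sketch, in which a hypothetical validator attaches a message-authentication code (HMAC) to each reading so that anyone holding the key can later confirm the record was not altered. The validator, the key handling, and the field names are all assumptions for illustration.

```python
import hashlib
import hmac
import json

# Hypothetical secret held by a third-party validation service.
VALIDATOR_KEY = b"validator-secret-key"

def attest(reading):
    """Return an authentication tag over the canonical reading."""
    payload = json.dumps(reading, sort_keys=True).encode()
    return hmac.new(VALIDATOR_KEY, payload, hashlib.sha256).hexdigest()

def check(reading, tag):
    """Confirm the reading matches the tag issued at recording time."""
    return hmac.compare_digest(attest(reading), tag)

reading = {"t": 1514764800, "signal": "skin_conductance", "value": 0.42}
tag = attest(reading)
print(check(reading, tag))                      # True
print(check({**reading, "value": 0.99}, tag))   # False (tampered)
```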

Protecting the largest troves of emotional data also will be a priority of governments; such information may even be categorized as critical infrastructure, and thus in some cases might fall under the protection of the state.

The relationship between hackers and their targets will also shift. Hackers would almost certainly go after the emotional data of high-profile individuals to try to expose their mind states to show hypocrisy. Defense departments and private-sector cybersecurity companies, meanwhile, will expand the concept of deterrence to include the emotional states of the cybercriminals and warriors at the other end of the network, because emotional manipulation will become a key driver in preventing cyberattacks.

As ever, response efforts will mix technology with regulation, and will seek to shift social norms around what is “appropriate” behavior and action in particular environments. Governments will now have a huge new tool in the war for public opinion. Will counterinsurgency funding in places like Iran and North Korea shift into the mass emotion manipulation domain? Or could the emotional status of particular foreign leaders be targeted on an ongoing basis? The results will be mixed, not least because this is a fundamentally new playing field. Communication about emotion has always been remarkably difficult, and it will take quite some time for people to understand what some of these new capabilities and insights truly mean.

The Way Forward

This is a world in which sensors become capable of identifying and tracking emotional shifts in individuals at a large scale. In such a world, corporations that engage in and offer emotional tracking as a service will see economic benefits; politicians will explore new campaign tools; and criminals will identify vulnerabilities presented specifically by the no-longer-so-mysterious landscape of human emotions. Cybercriminals will not only take advantage of tracking human emotions but, in subtly learning to manipulate them, will create an almost entirely new playing field for defenders to manage—without a great deal of clarity, in many cases, around exactly what it is they are defending against.

In this scenario, members of the cybersecurity research community will wish that, in 2016, they had been working on:

  • Modeling
    Identifying the underlying components of emotions and how they can be modeled in the datasets produced through a broad range of sensors (a minimal sketch follows this list)
  • Risks and Benefits
    Understanding the risks and benefits that the proliferation of relevant sensors may represent, including potential criminal manipulation of the sensors and the data they generate, and the attack surfaces those sensors present
  • Defining Security
    Defining the security characteristics of data beyond today’s domain-specific concerns, because medical, financial, and national-security data will no longer be defined by these category-specific divisions, but by the effects that such data can have on emotional states
  • Balance
    Balancing openness to innovation with various necessary regulatory protections in a realm as poorly understood as the digitization and storage of human emotion
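
As a minimal illustration of the “Modeling” item above, the following Python sketch maps multi-sensor feature vectors to coarse emotion labels with a nearest-centroid rule. The feature set, values, and labels are all invented; real emotional modeling would demand far richer signals, ground-truth labels, and validation than this toy example suggests.

```python
import math

# Each sample: (heart_rate_bpm, skin_conductance_uS, pupil_diameter_mm).
# All values are fabricated for illustration.
TRAIN = {
    "calm":     [(62, 2.1, 3.0), (65, 2.4, 3.1), (60, 1.9, 2.9)],
    "stressed": [(95, 7.8, 4.2), (101, 8.3, 4.5), (92, 7.1, 4.0)],
}

def centroid(samples):
    """Mean of each feature column."""
    return tuple(sum(col) / len(samples) for col in zip(*samples))

CENTROIDS = {label: centroid(s) for label, s in TRAIN.items()}

def classify(features):
    """Assign the label whose centroid is nearest in feature space."""
    return min(CENTROIDS, key=lambda lbl: math.dist(features, CENTROIDS[lbl]))

print(classify((98, 7.5, 4.3)))  # -> "stressed"
```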