David Monnerat

Dad. Husband. Product + AI. Generalist. Endlessly Curious.

  • The Democratization of (Everything)

    The Democratization of (Everything)

    A few years ago, I sat across the desk from a colleague, discussing their vision for a joint AI initiative. As a product manager, I pushed for clarity—what problem were we solving? What was the measurable outcome? What was the why behind this effort? His response was simple: democratization. Just giving people access. No clear purpose, no defined impact—just the assumption that making something available would automatically lead to progress. That conversation stuck with me because it highlighted a fundamental flaw in how we think about democratizing technology.

    The term “democratizing,” as applied to technology, began to gain traction in the late 20th century, particularly during the rise of personal computing and the internet.

    Democratizing technology typically means making it accessible to a broader audience, often by reducing cost, simplifying interfaces, or removing barriers to entry. The goal is to empower more people to use the technology, fostering innovation, equality, and progress.

    Personal computers would “democratize” access to computing power by putting it in the hands of individuals rather than large institutions or corporations. Similarly, the Internet would “democratize” access to information by removing the gatekeepers from publishing and content distribution.

    By the 2010s, “democratizing” became a buzzword in tech—used to describe making advanced tools like big data, AI, and machine learning accessible to more people. What was once in the hands of domain experts was now in the hands of the masses.

    Today, the term is frequently used in discussions about generative AI and other advanced technologies. These tools are marketed as democratizing creativity, coding, and problem-solving by making complex capabilities accessible to non-experts.

    The word “democratization” resonates because it aligns with broader cultural values, signaling fairness, accessibility, empowerment, and progress. The technology industry loves grand narratives, and “democratizing” sounds more revolutionary than “making more accessible.” It suggests that technology can break down barriers and create opportunities for everyone.

    However, as we’ve seen, the reality is often more complicated, and the term can sometimes obscure the challenges and inequalities that persist. Democratization often benefits those who already have the resources and knowledge while leaving others behind.

    I’ve long thought that the word “democratization” was an interesting choice when applied to technology because it resembles the ideals of operating a democratic state.[1] Both rely on the idea that giving people access will automatically lead to better outcomes, fairness, and participation. However, both involve the tension between accessibility and effective use, the gap between ideals and reality, and the complexities of ensuring equitable participation. In practice, access alone is not enough; people need education, understanding, and responsible engagement for the system to function effectively.

    Democratization ≠ Access

    I’ve encountered many leaders who equate democratization with access, as if the goal is to put the tools in people’s hands. However, accessing a tool doesn’t mean people know what to do with it or how to use it effectively. For example, just because people can access AI, big data, or generative tools doesn’t mean they know how to use them properly or interpret their outputs.

    Similarly, just because people have the right to vote doesn’t mean they fully understand policies, candidates, or the consequences of their choices.

    In technology, access is meaningful only when it drives specific outcomes, such as innovation, efficiency, or solving real-world problems. In a democratic state, access to voting and participation is not an end but a means to achieve broader goals, such as equitable representation, effective governance, and societal progress.

    Without a clear purpose, access risks becoming superficial, failing to address deeper systemic issues or deliver tangible improvements. In both cases, democratization must be guided by a vision beyond mere access to ensure it creates a meaningful, lasting impact.

    Democratization requires not just opening doors but also empowering individuals with the knowledge, understanding, and skills to walk through them meaningfully. Without this foundation, the promise of democratization remains incomplete.

    Democratization ≠ Equality

    The future is already here, it’s just not evenly distributed.

    William Gibson [2]

    The U.S. was built on democratic ideals. However, political elites, corporate interests, and media conglomerates shape much of the discourse because political engagement is skewed toward those with resources, time, and education. Underprivileged communities face barriers to participation.

    The same is true in technology. The wealthy and well-educated benefit more from new technology, while others struggle to adopt it and are left behind. AI and big data were meant to be open and empowering, but tech giants still control them, setting rules and limitations.

    Both systems struggle with the reality that equal access does not automatically lead to equal outcomes, as power dynamics and systemic inequalities persist. Even when technology is democratized, those with more resources or expertise often benefit disproportionately, widening existing inequalities.

    Bridging the gap between access and outcomes demands more than good intentions—it requires deliberate action to dismantle barriers, redistribute power, and ensure that everyone can benefit equitably. By focusing on education, structural reforms, and inclusive practices, both technology and democratic systems can move closer to fulfilling their promises of empowerment and equality.

    Democratization ≠ Expertise

    These are dangerous times. Never have so many people had so much access to so much knowledge and yet have been so resistant to learning anything.

    Thomas M. Nichols, The Death of Expertise

    Critical thinking is essential for both the democratization of technology and the functioning of a democratic state. In technology, access to AI, big data, and digital tools means little if people cannot critically evaluate information, recognize biases, or understand the implications of their actions. Misinformation, algorithmic manipulation, and overreliance on automation can distort reality, just as propaganda and political rhetoric can mislead voters in a democracy. Similarly, for a democratic state to thrive, citizens must question policies, evaluate candidates beyond slogans, and resist emotional or misleading narratives. 

    Without critical thinking, technology can be misused, and democratic processes can be manipulated, undermining the very ideals of empowerment and representation that democratization seeks to achieve. In both realms, fostering critical thinking is not just beneficial—it’s necessary for meaningful progress and equity.

    Addressing the lack of critical thinking in technology and humanity at large requires a holistic approach that combines education, systemic reforms, and cultural change. We can build a more informed, equitable, and resilient society by empowering individuals with the skills and tools to think critically and creating systems that reward thoughtful engagement. This is not a quick fix but a long-term investment in the health of technological and democratic systems.

    Democratization ≠ Universality

    Both technology and governance often operate under the assumption that uniform solutions can meet the diverse needs of individuals and communities. This can result in a mismatch between what is offered and what is actually required, highlighting the limits of a one-size-fits-all approach.

    In technology, for example, AI tools and software may be democratized to allow everyone access, but these tools often assume a certain level of expertise or familiarity with the technology. While they may work well for some users, others may find them difficult to navigate or unable to fully harness their capabilities. A tool designed for the general public might unintentionally alienate those who need a more tailored approach, leaving them frustrated or disengaged.

    Similarly, in governance, policies are often created with the idea that they will serve all citizens equally. However, a single national policy—whether on healthcare, education, or voting rights—can fail to account for the vastly different needs and circumstances of different communities. For example, universal healthcare policies may not address the specific healthcare access issues faced by rural or low-income populations, and standardized educational curriculums may not be effective for students with different learning needs or backgrounds. When solutions are not tailored to the unique realities of diverse groups, they risk reinforcing existing inequalities and failing to deliver meaningful results.

    The challenge, then, is finding a balance between providing access and ensuring that solutions are adaptable and responsive to the needs of different communities. Democratization doesn’t guarantee universal applicability, and it’s essential to recognize that true empowerment comes not just from providing access but from ensuring that access is meaningful and relevant to everyone, regardless of their context or capabilities. Without this careful consideration, democratization can become a frustrating experience that leaves many behind, ultimately hindering progress rather than fostering it.

    Conclusion

    The democratization of technology, much like democracy itself, is harder than it sounds. Providing access to tools like AI or big data is only the first step—it doesn’t guarantee that people know how to use them effectively or equitably. Without the necessary education, critical thinking, and support, access alone can be frustrating and lead to further division rather than empowerment.

    Just as democratic governance struggles with the assumption that one-size-fits-all policies can serve diverse communities, the same happens with technology. Tools designed to be universally accessible often fail to meet the unique needs of different users, leaving many behind. Real democratization requires not just opening doors but ensuring that everyone has the resources to walk through them meaningfully.

    Democracy is challenging in both technology and governance. It’s not just about giving people access; it’s about giving them the knowledge, understanding, and opportunity to use that access in ways that truly empower them.

    Until we get this right, the promise of democratization (and democracy) remains unfulfilled.

    Footnotes

    1. The United States of America is a representative democracy (or a democratic republic). ↩︎
    2. https://quoteinvestigator.com/2012/01/24/future-has-arrived/ ↩︎
  • It’s Agentic! (Boogie Woogie, Woogie)

    It’s Agentic! (Boogie Woogie, Woogie)

    You can’t see it
    It’s electric boogie woogie, woogie
    You gotta feel it
    It’s electric boogie woogie, woogie
    Ooooh, it’s shocking
    It’s electric boogie woogie, woogie

    Marcia Griffiths

    Just as the shininess of generative AI started to lose its polish from the reality of trying to use it, a new buzzword has re-entered the lexicon: agentic.

    The term “agentic” refers to AI systems that exhibit autonomy and goal-directed behavior. These systems go beyond simply generating responses or content based on input—they act as agents capable of making decisions, taking actions, and adapting to achieve specific objectives in dynamic environments.

    The concept of autonomous agents is not new. We’re bringing back the classics. In Artificial Intelligence: A Modern Approach (1995), Stuart Russell and Peter Norvig defined an agent as anything that perceives its environment and acts upon it.
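    That definition can be made concrete in a few lines. The sketch below is a hypothetical, minimal agent—a toy thermostat whose function names, target, and thresholds are illustrative assumptions, not from the book or this essay—that maps each percept of its environment to an action in pursuit of a goal:

```python
# A minimal illustration of the perceive-and-act definition of an agent:
# a toy thermostat that pursues a goal (a target temperature).
# All names and thresholds here are illustrative assumptions.

def thermostat_agent(percept: float, target: float = 20.0) -> str:
    """Map a percept (current temperature) to an action."""
    if percept < target - 1.0:
        return "heat_on"
    if percept > target + 1.0:
        return "heat_off"
    return "idle"

def run_environment(temps):
    """Feed a sequence of percepts to the agent and collect its actions."""
    return [thermostat_agent(t) for t in temps]

print(run_environment([17.5, 19.8, 22.3]))  # ['heat_on', 'idle', 'heat_off']
```

    Even this trivial loop has the core agentic shape: sense, decide, act. What separates modern agentic AI is the richness of the percepts (language, documents, tool outputs) and the breadth of the actions.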

    The idea that we can automate routine tasks to free people up for more challenging, more creative work is noble and has been a rallying cry of computers and software since their inception. In his 1945 essay “As We May Think,” Vannevar Bush imagined a device called the “Memex” to help humans store, organize, and retrieve information efficiently, aiming to reduce mental drudgery and aid creativity.

    Computers were first used to automate repetitive, time-consuming industrial tasks, especially in manufacturing. Early pioneers recognized that this freed humans for more complex supervisory roles.

    As computers and software became more accessible, researchers explored “expert systems” that were designed to take over repetitive knowledge-based tasks to allow professionals to focus on more challenging problems.

    Today, generative AI tools like ChatGPT, GitHub Copilot, and others attempt to fully realize this concept by automating tasks like writing, coding, design, and data analysis, allowing humans to concentrate on strategy, creativity, and innovation.

    But since the mainstream generative AI boom of 2022, which saw the public availability of ChatGPT and GitHub Copilot, and the expansion of competition in 2023 and 2024, the challenges of bringing this technology into the enterprise and driving meaningful value have dampened the early enthusiasm.

    That’s not to say that generative AI hasn’t been valuable. Many enterprises report productivity gains[1][2] from early use cases like code generation, knowledge management, content generation, and marketing. However, several challenges have made it more difficult to scale and adopt generative AI more broadly.

    Hallucinations, where outputs are factually incorrect, fabricated, or nonsensical despite appearing plausible or confident, introduce risk and distrust into generative AI solutions. Toxic and harmful language, including encouragement of self-harm and suicide[3][4], further exposes companies and reduces their willingness to expose customers directly to generative AI output.

    Introducing generative AI has also highlighted systemic internal issues. Knowledge management use cases were seen as straightforward, low-risk ways to leverage generative AI. For example, retrieval-augmented generation (RAG) allows users to get context-aware answers by combining AI-generated content with real-time retrieval of relevant information from sources like internal documents and databases.

    But what happens when that documentation is missing, outdated, or incorrect? What if it is ambiguous or contradicts itself? What if it is not in a format the RAG system can easily consume? Generative AI is not a technical solution to poor documentation and data. As the saying goes, “Garbage in. Garbage out.”
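    To make the dependency on documentation concrete, here is a minimal, hypothetical sketch of the retrieval half of a RAG pipeline. The document store, keyword-overlap scoring, and prompt format are all illustrative assumptions; a production system would use embedding-based retrieval and an actual model call:

```python
# A toy sketch of RAG retrieval. The documents, the naive keyword-overlap
# scoring, and the prompt template are illustrative assumptions; real
# systems use embeddings and a live LLM in place of this prompt string.

documents = {
    "vpn-setup": "Connect to the corporate VPN using the approved client.",
    "pto-policy": "Employees accrue 1.5 days of paid time off per month.",
}

def retrieve(query: str, docs: dict, k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:k]]

def build_prompt(query: str, docs: dict) -> str:
    """Assemble the context-plus-question prompt sent to the model."""
    context = "\n".join(docs[d] for d in retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

print(build_prompt("how many days of paid time off do I accrue", documents))
```

    The sketch also shows why documentation quality matters so much: the model only ever sees what the retriever returns, so if the stored text is wrong, stale, or ambiguous, the generated answer inherits that flaw.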

    While the challenges above are relevant to most generative AI implementations, one that applies specifically to agentic AI and agents relates to business processes.

    Agentic AI relies on well-defined tasks, workflows, goals, and a clear understanding of how processes operate to function effectively. If processes are unclear or undocumented and data is inconsistent, incomplete, or unavailable, the AI may struggle to execute tasks properly or optimize workflows.

    Generative AI and agents are not a technical solution to inefficient processes, just as AI isn’t a solution to bad data. Automating an inefficient process with an agent could reinforce and scale those inefficiencies, creating more bottlenecks or errors.

    Companies often prefer introducing new technology rather than spending resources updating outdated documentation or optimizing processes. These challenges highlight the risks of skipping those steps, especially when agents can execute transactions automatically and interact directly with customers. The possibilities of financial and reputational damage by a wayward agent are dangerously real.

    However, the lure of automation and operational efficiency is strong, and the landscape of offerings in the agentic AI space continues to grow. In 2024, the market size was estimated to be between $5B and $31B[5][6]. By 2032, the market is projected to reach approximately $48.5B[7]. Like a lottery jackpot, that dollar figure is causing companies to forget the struggles of implementing non-agentic generative AI in pursuit of a big payoff through automation. But what opportunities to improve business and customer outcomes without agents (or even AI) are being missed while chasing that payoff?

    That’s not to say there isn’t a place for agentic AI. Similar to the gains seen from generative AI, the ecosystem of conversational, natural language, multi-model, and adaptive agents can be a powerful tool to solve complex problems and drive value. However, it will take time, because work must be done before this value can be fully realized. To paraphrase a quote: the road to generative AI (and agentic AI) is clearer than ever before, but it’s much longer than we thought.

    Recommendations

    While we travel that road to the promised land, there are a few areas companies can focus on to prepare for an agentic world:

    Invest in documentation management and data quality. If previous AI projects failed due to poor documentation or data, an AI agent will likely have the same fate as its predecessor. Companies may see incremental gains through this effort because it’s likely that poor data and documentation are creating inefficiencies. For example, poor documentation can cause support agents to struggle to find answers, causing longer handling times.

    Invest in process optimization. The simpler a process is, the more likely it can be automated. I’ve found that companies want to keep their complex processes, which humans often find challenging to navigate, and think that automating them is a faster path to efficiency gains. The reality, however, is that complex processes have a long tail of edge cases that cause automation to break down, require extensive troubleshooting and tuning, and cancel out value.

    Simplify architectures and APIs. One aspect of autonomy for agentic AI is access to tools and functions that the agent can execute to act. An agent cannot effectively utilize complex APIs that wrap multiple functions and are not well-instrumented.
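    As an illustration of what “simple and well-instrumented” can mean in practice, here is a hypothetical tool definition in the spirit of common function-calling formats. The tool name, parameters, and schema shape are my assumptions, not any specific vendor’s API; the point is one narrow action with a description clear enough for an agent to reason about:

```python
# Hypothetical example of exposing a function as an agent "tool".
# The names and schema shape are illustrative assumptions, loosely
# modeled on common function-calling formats, not a specific vendor API.

def get_order_status(order_id: str) -> str:
    """Return the status of a single order: one narrow, well-named action."""
    # In a real system this would call an internal service.
    return {"A1001": "shipped"}.get(order_id, "unknown")

# A machine-readable schema the agent can use to decide when and how
# to call the tool. Contrast this with one API wrapping many functions
# behind a single opaque endpoint, which agents struggle to use.
GET_ORDER_STATUS_TOOL = {
    "name": "get_order_status",
    "description": "Look up the shipping status of one order by its ID.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {"type": "string", "description": "Order identifier"}
        },
        "required": ["order_id"],
    },
}

print(get_order_status("A1001"))  # shipped
```

    The design choice is the same one the recommendation argues for: many small, single-purpose, well-described tools are easier for an agent to select, call, and verify than one complex API that does everything.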

    Focus on risk mitigation. As mentioned above, generative AI and agentic AI introduce risks, including hallucinations, toxic and harmful language, and a lack of oversight and controls. If the best time to plant a tree is 20 years ago, the best time to implement guardrails and controls is before introducing agents. As business processes are reviewed, optimized, and documented, attention should be paid to identifying and securing vulnerable points.

    Identify small use cases with low risk and high value. It can be tempting to throw agentic AI at the biggest or most expensive problem to maximize the return on investment. However, starting with complex, high-stakes processes increases the likelihood of errors, inefficiencies, and stakeholder resistance. Instead, focus on areas where agentic AI can deliver quick wins. This approach allows teams to refine their understanding of the technology, build trust, and develop best practices before scaling to more critical or complex use cases.

    Consider non-agentic and non-AI solutions. Of all the recommendations, this one will likely generate the most resistance as companies are pushing towards the promise of generative AI and agentic AI solving all problems. Improving customer service or reducing call volume through better internal documentation or website search won’t generate enough buzz to show up in a news feed. There is so much pressure to find problems that can be solved with generative AI, forcing a solution-first, technology-first mindset. Ultimately, it should never be about the technology. It should be about the outcomes. It should be about improving our customers’ and employees’ lives and experiences and the value we bring to the business. Start with a problem or pain point and work backward. Consider all possible solutions, and choose the one most likely to succeed, even if it doesn’t feed the hype.

    Conclusion

    While the allure of agentic AI is undeniable, achieving its promised potential requires deliberate preparation, thoughtful execution, and a focus on foundational improvements.

    Companies must resist the urge to chase the hype and prioritize efforts that enhance data quality, streamline processes, and establish robust risk mitigation strategies.

    Starting with low-risk, high-value use cases can build momentum, trust, and a clear path to scalable adoption. At the same time, leaders should remain open to non-agentic and non-AI solutions that more effectively and sustainably address pain points.

    Ultimately, the goal should not be to implement the latest technology for its own sake but to deliver meaningful outcomes that enhance customer experiences, empower employees, and drive long-term business value.

    The journey toward agentic AI may be longer and more complex than we thought, but with the right approach, we can significantly increase the likelihood of realizing its full value.

    You can’t see it. You gotta feel it. Ooooh, it’s shocking. It’s agentic.

    Footnotes

    1. https://cloud.google.com/resources/roi-of-generative-ai ↩︎
    2. https://www.wsj.com/articles/its-time-for-ai-to-start-making-money-for-businesses-can-it-b476c754 ↩︎
    3. https://gemini.google.com/share/6d141b742a13 ↩︎
    4. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0 ↩︎
    5. https://www.emergenresearch.com/industry-report/agentic-artificial-intelligence-market ↩︎
    6. The wide range in the 2024 figure is likely due to differing methodologies or market definitions. ↩︎
    7. https://dataintelo.com/report/agentic-ai-market ↩︎
  • In Defense of One-on-Ones

    In Defense of One-on-Ones

    Earlier this week, a former colleague forwarded me a video of Airbnb CEO Brian Chesky in an interview with Fortune saying he doesn’t believe in one-on-one meetings.

    The full context might reveal nuances specific to certain managerial levels. For example, he could have been referring to CEOs having one-on-ones with the rest of the C-suite who report to them. However, most of his references were to “employees,” and most of the comments on the video seem to generalize across all one-on-ones. Based on the video and related commentary, there appears to be growing skepticism about the value of one-on-ones.

    I’ve worked for bosses who didn’t see the value in one-on-ones, and I’ve worked for bosses who would use them to drive their agenda. When managers ignore or misuse one-on-ones, employees feel undervalued, disconnected, and unsupported.

    I’ve also been fortunate to work for bosses who modeled what a one-on-one should be. When managers prioritize regular one-on-ones, employees feel heard, supported, and valued. This fosters trust, alignment, and engagement, benefiting both the employee and the organization.

    Based on my experience, getting rid of one-on-ones is a terrible idea. (However, I favor getting rid of bad one-on-ones.)

    A few comments from the video highlighted misconceptions about one-on-ones or are signals of a bad one-on-one.

    “The employee owns the agenda. And what happens is they often don’t talk about the things you want to talk about.”

    One-on-ones are more than just about the agenda — they’re about building trust, understanding what motivates your team, and catching minor issues before they become big problems. Even if the employee’s agenda doesn’t directly overlap with your immediate goals, it gives you insight into what they’re thinking and feeling, which can help you guide them more effectively.

    That said, there are ways to make the meeting productive for both sides. The beauty of one-on-ones is that they’re a two-way conversation. While employees should have space to bring up what’s important to them, the manager also has an opportunity to steer the conversation toward topics they find valuable. It doesn’t have to be one or the other — it can be a balance.

    One way to address this is by co-creating the agenda. Before each meeting, you could ask the employee to suggest a couple of items they want to discuss, and you can add one or two topics that align with what you want to address. That way, both sides feel heard, and the meeting stays focused.

    In the spirit of the statement in the video, resist the temptation to control the full agenda. Remember, the one-on-one should be about the employee…not everything needs to be about you.

    “You become their therapist” and “They’re bringing you problems but often times they’re bringing you problems that you want other people in the room to hear. In other words, there’s very few times an employee should come to you one-on-one without other people.”

    One aspect of this comment I agree with is that some conversations are more appropriate in a team setting. Employees sometimes raise work items or project status in a one-on-one, such as surfacing a new issue, that they should bring to the full team. The nod to Jensen Huang’s quote, “I don’t do one-on-ones because I want everyone to be part of the solution and get the wisdom,” is appropriate.

    But often, employees bring these topics up because they think that’s what their manager wants to hear. After all, that’s the only thing their manager asks about in one-on-ones.

    Sometimes, an employee, especially a junior one, doesn’t know how to bring a difficult topic up to the team. As a leader, those present opportunities to coach them and help shape how to bring those items to the team in a way that supports your organization’s culture.

    Finally, if an employee is bringing you items that are genuinely more appropriate for a therapist, you, as a leader, should set those boundaries and guide them to a more appropriate forum. However, don’t dismiss these topics when they relate to workplace well-being…the employee may be asking for accommodations, not solutions.

    “If they’re concerned about something, if they’re having a difficult time in their personal life, if they want to confide in something; they don’t feel safe telling a group. But that should be infrequent.”

    As I mentioned above, if the employee brings challenges in their personal life, look for opportunities to provide accommodations, not solutions. If the employee expects more and you are not willing, capable, or permitted to engage further, guide them to appropriate resources.

    But if they don’t feel safe bringing a topic to a group, coming to you is a gift. It’s a sign that your team has a perceived lack of safety or a potentially unhealthy dynamic that needs to be addressed.

    Also, while these items should be infrequent, this is not an exhaustive list of topics appropriate for a one-on-one conversation. Career development, goals, feedback, and recognition should be regular topics, too.

    Even with the expanded list of potential topics, there’s also no requirement that one-on-ones be weekly. It’s less about the frequency and more about the regularity and building a strong, trusting relationship that empowers the employee to thrive.

    A final note on bad one-on-ones…

    One of my first managers to schedule one-on-ones (probably from a corporate directive) said, “We’re going to have one-on-ones. Send me an agenda beforehand.”

    “Send me an agenda” is ambiguous and intimidating, especially for junior employees. I had no other context, no list of suggested topics, and no idea what I was doing. I would send a status-focused agenda because that’s all I knew. I don’t think either of us got much out of those meetings.

    Eventually, I reported to a different manager who also scheduled regular one-on-ones. When I sent the agenda, my manager stopped by to apologize for assuming that I understood the purpose of the meeting. They helped me move from a status-focused agenda to one that balanced my work, career, and where I needed help. I felt seen and supported, and that showed up in my work.

    While both the employee and the manager can be responsible for bad one-on-one meetings, the balance of responsibility skews toward the manager because they hold the leadership role and set the tone for the meetings.

    As a leader, take the responsibility and focus it on what is best for your employees and organization.

    Don’t abolish one-on-ones.

    Make them better.

  • The Humanity In Artificial Intelligence

    The Humanity In Artificial Intelligence

    I wrote this essay in 2017. When I restarted the blog, I removed the posts that had already been published. But after rereading this one, I realized that while the technology has advanced significantly since then, the sentiment still applies today.

    Dave, January 2025


    Algorithms, artificial intelligence, and machine learning are not new concepts. But they are finding new applications. Wherever there is data, engineers are building systems to make sense of that data. Wherever there is an opportunity for a machine to make a decision, engineers are building it. It could be for simple, low-risk decisions to free up a human to make a more complicated decision. Or it could be because there is too much data for a human to decide. Data-driven algorithms are making more decisions in many areas of our lives.

    Algorithms already decide what search results we see. They determine our driving routes or assign us the closest Lyft, and soon, they will enable self-driving cars and other autonomous vehicles. They’re matching employers with job candidates. They recommend the next movie you should watch or the product you should buy. They’re figuring out which houses to show you and whether you can pay the mortgage. The more data we feed them, the more they learn about us, and they are getting better at judging our mood and intention to predict our behavior.

    I’ve been thinking a lot about these systems lately. My son has epilepsy, and I’m working on a project to gauge the sentiment towards epilepsy on social media. I’m scraping epilepsy-related tweets from Twitter and feeding them to a sentiment analyzer. The system calculates a score representing whether an opinion is positive, negative, or neutral.

    Companies already use sentiment analysis to understand their relationships with customers. They analyze reviews and social media mentions to measure the effectiveness of an ad. They can inspect negative comments and find ways to improve a product. They can also see when a public relations incident turns against them.

    For the epilepsy project, my initial goal was to track sentiment over time. I wanted to see why people were using Twitter to discuss epilepsy. Were they sharing positive stories, or were they sharing hardships and challenges? I also wanted to know whether people responded more to positive or negative tweets.

    While the potential is there, the technology may not be quite ready. These systems aren’t perfect; context and the complexities of human expression can confuse even humans. An immature algorithm might read “I [expletive] love epilepsy” as expressing a positive sentiment, and the effectiveness of any system built on top of such algorithms is limited by the algorithms themselves.

    I considered this as I compared two sentiment analyzers. They gave me different answers for tweets that expressed a negative sentiment. Of course, which was “right” could be subjective, but most reasonable people would have agreed that the tone of the text was negative.
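    A toy lexicon-based scorer shows how such disagreements and misreadings arise. The word list and weights below are illustrative assumptions, not the analyzers I actually compared; because it simply sums fixed word weights, it is blind to negation, sarcasm, and context:

```python
# A toy lexicon-based sentiment scorer (an illustrative assumption,
# not a real analyzer). It sums fixed per-word weights, so negation,
# sarcasm, and context are invisible to it.

LEXICON = {"love": 2.0, "great": 1.5, "hate": -2.0, "awful": -1.5}

def naive_sentiment(text: str) -> float:
    """Sum word-level weights; positive > 0, negative < 0, neutral = 0."""
    return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

print(naive_sentiment("i love this"))         # 2.0 (positive)
print(naive_sentiment("i do not love this"))  # 2.0 -- negation is invisible
```

    Two analyzers with different lexicons or different handling of negation can easily disagree on the same tweet, which is exactly the kind of divergence I was seeing.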

    Like a child, a system sometimes gets a wrong answer because it hasn’t learned enough to know the right one. That was likely the case in my example: the answer came down to limitations in the algorithm. Still, imagine if I built a system to predict a patient’s mood on top of an immature algorithm. When the foundation is wrong, the house will crumble.

    But, also like a child, sometimes they give an answer because a parent taught them that answer. Whether through explicit coding choices or biased data sets, systems can “learn wrong”. After all, people created these systems—people, with their logic and ingenuity, but also their biases and flaws. A human told it that an answer was right or wrong. A human with a viewpoint. Or a human with an agenda.

    We create these systems with branches of code and then teach them which branch to follow. We let them learn and show enough proficiency, and then we trust them to keep getting better. We create new systems and give them more responsibility. But somewhere, back in the beginning, a fallible human wrote that first line of code. It is impossible for those actions not to influence every outcome.

    These systems will continue to be pervasive, reaching into new areas of our lives. We’ll continue to depend on and trust them because they make our lives easier. And because they get it right most of the time. The danger is assuming they always get it right and not questioning an answer that feels wrong. “The machine gave me the answer, so it must be true” is a dangerous statement, now more than ever.

    We dehumanize these programs because of the cold metal boxes in which they run. However, they are extensions of our humanity, and it’s important to remember their human origins.