Inclusions, Exclusions, and AGI — A Student's Guide
This assignment sounds deceptively simple until you sit down to write it. Who gets included in the AGI future, who gets left out, and what determines the difference — these are not abstract questions. This guide gives you the frameworks, the arguments, the research sources, and the essay structure to handle the topic with the analytical depth your instructor is looking for.
What the Assignment Is Actually Asking — Before You Write a Word
An assignment on inclusions, exclusions, and AGI is asking you to analyze power. Specifically: who participates in building artificial general intelligence, who governs it, who benefits from it, and who bears its costs and risks — and what structural factors determine those distributions. It is a question about technology as a social and political phenomenon, not just a technical one.
Before you start writing, read your assignment prompt one more time. Most versions of this question are operating at one of three levels — sometimes simultaneously. Knowing which level your prompt is targeting will save you from writing a technically correct essay that misses the point.
Most strong essays engage all three levels. The weakest submissions treat the topic as purely about digital access — as if it were a question about who has Wi-Fi. It is not. The inclusion/exclusion dynamic in AGI runs from who gets hired at the labs building these systems, to whose languages and cultural norms are reflected in their outputs, to which communities lose jobs first when AGI displaces labor, to which governments have any regulatory leverage at all.
That is a lot of ground. You do not need to cover all of it. But you need to be clear about which part of it your essay is addressing — and your introduction should signal that explicitly.
What Course Discipline Are You Writing From?
This matters more than students realize. An inclusions/exclusions/AGI essay written for a philosophy course will foreground ethical frameworks — utilitarian calculus, Rawlsian justice, capability approaches. The same topic in a sociology course will foreground structural inequality, race, class, and labor. In a political science course, it becomes a question about governance, regulation, and geopolitical power. In computer science ethics, it may focus on algorithmic bias and design choices. Know your discipline. Your theoretical framework and the thinkers you cite should reflect the analytical vocabulary of the course you are writing for.
Defining AGI With Enough Precision to Actually Argue About It
A lot of student essays on this topic go wrong in the first paragraph because they define AGI either too loosely (“any advanced AI”) or too futuristically (“a superintelligent robot that surpasses all human capabilities”). Neither definition is useful for an analytical essay about inclusion and exclusion.
Here is what the research literature actually says. Artificial General Intelligence refers to a hypothetical AI system capable of performing any intellectual task a human can perform — across domains, without task-specific training, at a level equal to or exceeding human competence. The key words are “across domains” and “without task-specific training.” Current AI systems — including large language models, image generators, and recommendation engines — are narrow AI. They do one thing, or a set of related things, extremely well. But they do not generalize broadly, and they often fail outside their training distribution.
AGI, by definition, generalizes. It transfers learning from one domain to another the way humans do. No system has definitively achieved this as of 2026. But the trajectory of AI development — the rapid scaling of large models, the emergence of surprising cross-domain capabilities — has made AGI a live research target at major labs including OpenAI, Google DeepMind, Anthropic, and Meta AI.
Why the Definition of AGI Is Itself an Inclusion/Exclusion Question
This is a genuinely interesting analytical point worth making in your essay: the definition of AGI is contested, and the contest is not merely technical. What counts as “general” intelligence? The benchmarks used to test AI systems — standardized tests, coding challenges, mathematical reasoning tasks — reflect particular cultural and educational assumptions. A system optimized on Western educational assessment data may perform brilliantly on those benchmarks while failing tasks that require cultural knowledge, emotional attunement, or forms of practical reasoning that are not legible to the testing apparatus. Who designs the benchmarks determines what counts as “general” — and that is a governance and inclusion question wrapped inside a technical one.
For your essay, define AGI early and stick to that definition. If you are using it to mean advanced AI systems that are on the trajectory toward AGI, say so explicitly. If you are treating it as a future development whose inclusion/exclusion dynamics are being shaped by present decisions, say that too. Ambiguity in the definition will undermine every argument that follows.
What “Inclusion” and “Exclusion” Mean as Analytical Concepts
These terms do real theoretical work. In the social sciences, social exclusion refers to the processes by which individuals or groups are systematically prevented from participating fully in the social, economic, political, and cultural life of society. It is not the same as poverty, though poverty is often a mechanism of exclusion. It is structural, relational, and multidimensional.
Applied to AGI, exclusion operates across at least four distinct dimensions. All four are worth knowing before you write:
Production Exclusion
Who is excluded from building AGI systems? Developer workforce demographics, research lab geography, funding concentration, and academic pipeline barriers.
Governance Exclusion
Who is excluded from decisions about how AGI is regulated, what values it embeds, and what uses are permitted? Civil society, Global South governments, affected communities.
Access Exclusion
Who cannot use AGI systems due to cost, language, digital infrastructure, disability, or cultural misalignment in design?
Risk Exclusion
Who disproportionately absorbs the negative externalities — job displacement, surveillance, biased decision systems, environmental costs of compute?
Inclusion, symmetrically, refers to the presence and meaningful participation of diverse groups across these same dimensions. Not tokenism — not a diversity initiative that hires two people from underrepresented backgrounds and otherwise proceeds as before. Meaningful inclusion means that the perspectives, priorities, and interests of different groups actually shape what gets built, how it is governed, and who benefits.
Technology is not neutral. Every design choice encodes assumptions about whose problems matter, whose context is the default, and whose experience is considered an edge case.
— Central premise of critical AI studies
Six Frameworks for Analyzing Inclusions and Exclusions in AGI
You do not need all six. Pick one or two that fit your course’s theoretical vocabulary and argue from them consistently. The mistake is to name-drop frameworks without actually applying them — mentioning “critical race theory” in the intro and then writing a generic essay that could have been written without any framework at all.
Critical AI Studies
Examines whose values, assumptions, and interests are encoded in AI systems
Political Economy of Technology
Analyzes who owns, controls, and profits from AGI — and how capital concentration shapes access
Postcolonial Technology Studies
Examines how AGI development reproduces colonial power dynamics at global scale
Capability Approach (Sen / Nussbaum)
Asks whether AGI expands or contracts what people are substantively free to be and do
Rawlsian Justice and Distributive Ethics
Evaluates AGI outcomes against a principle of fairness to the least advantaged
Intersectionality (Crenshaw)
Examines how multiple axes of identity compound patterns of exclusion
The Main Axes of Exclusion in AGI — What the Evidence Shows
This section covers the specific arguments your essay may need to engage. Pick the axes that are most relevant to your assignment prompt and your chosen framework. Do not try to cover all of them — a focused essay that develops two or three well-evidenced arguments will score higher than a survey that touches seven points superficially.
Labor Displacement and Class Exclusion
Who loses jobs first, and who captures the productivity gains?
The labor displacement argument is the most widely discussed axis of exclusion. AGI — and even current narrow AI — threatens to automate a broad range of cognitive and physical tasks. The key analytical question is not whether displacement will happen, but how it will be distributed. The historical pattern of automation is fairly clear: workers in routine-intensive roles, with lower formal education, and with less economic and political leverage are displaced first and recover slowest.
What makes AGI different from previous automation waves is its breadth. Previous waves primarily displaced manual, routine, and low-skill work. AGI targets cognitive work across the skill spectrum. Research from Goldman Sachs estimated in 2023 that roughly two-thirds of jobs in the United States have some degree of exposure to AI automation — and that professional services, legal work, and financial analysis are among the most exposed sectors. That changes the political economy of displacement significantly: when lawyers and accountants are displaced alongside warehouse workers, the lobbying dynamics are different.
The inclusion/exclusion question is sharpest here when you look at who captures the gains. If AGI dramatically increases productivity, the distribution of those productivity gains depends entirely on ownership structures, tax policy, and labor bargaining power — none of which are determined by the technology itself. An essay that treats labor displacement as purely a technical problem misses the political economy entirely.
Key evidence and thinkers to engage: Daron Acemoglu and Simon Johnson’s Power and Progress (2023) argues that automation’s distributional effects are not inevitable but are determined by political choices about how productivity gains are allocated. Erik Brynjolfsson’s work on the productivity paradox is also relevant.
Geographic and Geopolitical Exclusion
The Global South as resource base, not governance participant
The geography of AGI development is stark. As of 2025, the overwhelming majority of frontier AI research is conducted by organizations headquartered in the United States and China — with a secondary cluster in the UK, France, and Canada. The compute infrastructure that makes large-scale AI training possible is concentrated in data centers in these same countries. The venture capital financing AI development flows primarily to companies in these regions.
Against this, the Global South contributes essential inputs to AGI development — raw materials for semiconductor manufacturing, low-cost data labeling and annotation labor, and the behavioral data generated by billions of users — while receiving essentially no governance representation. The AI governance processes that will establish safety standards, liability frameworks, and use restrictions are dominated by the United States, the European Union, and China. Countries in sub-Saharan Africa, South Asia, and Latin America are largely absent from these conversations despite being home to nearly four billion people who will live under the resulting governance regime.
This is the postcolonial argument in its sharpest form: AGI reproduces a colonial division of labor in which the Global South provides the raw inputs and the Global North retains the value, the intellectual property, and the decision-making authority.
Key evidence: The AI Index published annually by Stanford’s Human-Centered AI Institute provides detailed data on the geographic concentration of AI research and investment. The work of researchers at the Distributed AI Research Institute (DAIR), founded by Timnit Gebru, and of scholars such as Abeba Birhane addresses Global South exclusion from AI governance directly.
Language, Culture, and Representational Exclusion
When the training data does not include you, the system does not work for you
Training data is the raw material of AI systems, and the training data that shapes current large language models is overwhelmingly in English — with secondary representation of a small number of other high-resource languages like Chinese, Spanish, French, and German. Of the approximately 7,000 languages spoken globally, fewer than 100 have any meaningful representation in major AI training datasets.
This is not a minor technical limitation. It means that AGI systems trained on this data will reflect the cultural assumptions, narrative patterns, reasoning styles, and value systems of the dominant linguistic communities in the training corpus. For speakers of Yoruba, Swahili, Tamil, or Quechua, AGI systems will perform worse — often dramatically worse — on tasks requiring cultural knowledge, idiomatic language understanding, or context-dependent reasoning. The system is not neutral. It has a cultural center of gravity, and that center is English-speaking and Western.
At a deeper level, the exclusion operates through what gets counted as knowledge and whose epistemic practices get validated. Indigenous knowledge systems, oral traditions, embodied knowledge, and community-based understanding are largely absent from the text corpora that train these systems — not because they are inferior, but because they are not well-represented in digitized, searchable form.
Democratic Exclusion from AI Governance
Who decides how AGI is built and constrained — and who does not get a vote
AI governance is happening right now, in a very particular institutional form. Major labs publish voluntary safety commitments. Governments issue executive orders and frameworks. International bodies convene conferences. The EU AI Act is the most comprehensive binding regulatory framework so far. All of this is real governance activity — and almost none of it involves the meaningful participation of the communities most likely to be affected by AGI deployment.
This is what political scientists call a democratic deficit: consequential decisions are made without the consent or participation of those governed by them. The people most likely to lose jobs to AGI automation have no formal role in shaping the deployment timeline or the safety requirements. Communities where predictive policing algorithms will be applied have limited recourse when those systems harm them. Future generations — arguably the most consequentially affected stakeholders in any decision about transformative technology — have no representation at all.
The governance exclusion argument connects directly to the question of whose values AGI embeds. If governance processes are dominated by AI labs, technology investors, and a small number of Western democratic governments, the resulting frameworks will tend to reflect and protect those actors’ interests. That is not a conspiracy — it is how governance works when participation is unequal.
Key evidence: The Stanford AI Index tracks AI policy developments globally and includes data on which governments are active in AI governance and which are not. The Centre for the Governance of AI, which originated at Oxford’s Future of Humanity Institute, has produced significant work on democratic participation in AI governance.
The Case for Inclusion — Arguments You Need to Know and Engage
A strong essay on this topic does not just catalog exclusion. It engages seriously with the arguments for inclusion — both the genuine promise and the ways that promise is complicated or qualified. Here are the main claims worth knowing.
The Universal Access Argument
AGI as a potential equalizer — lowering the cost of expertise across the world
The Diversity Dividend Argument
Inclusive development produces better, safer, more robust AGI systems
The Regulatory Inclusion Argument
Broader governance participation produces more legitimate, more durable regulatory frameworks
How to Structure Your Essay on Inclusions, Exclusions, and AGI
Most essays on this topic benefit from a clear argumentative structure rather than a survey approach. Here is a reliable structure you can adapt to your word count and prompt requirements.
Introduction: Define Your Terms and Name Your Argument
Define AGI with appropriate precision for your course level. Define inclusion and exclusion as analytical concepts — not just their common-sense meanings. Then state your central argument clearly: what is the main claim your essay will defend? The strongest introductions name a specific tension or paradox — for example, that AGI is being developed in the name of human flourishing while being built by institutions and processes that systematically exclude most of humanity from its governance and benefits. Your framework should be named here too — one or two sentences is enough to orient the reader without turning the intro into a literature review.
Section One: Establish the Stakes
Before you can argue about inclusion and exclusion, your reader needs to understand why AGI is different from previous technology developments — why the inclusion/exclusion dynamics here carry more weight than they did for, say, the smartphone. This section establishes AGI’s transformative potential (and risk) in terms that justify the analytical attention you are about to give to who is in and who is out. Keep it brief — one to two paragraphs. The point is to motivate the analysis, not to write a technology explainer.
Section Two: The Exclusion Analysis
This is the core of your essay. Select two or three axes of exclusion from the list above — the ones most relevant to your prompt and framework. For each, present the argument, support it with specific evidence (statistics, research findings, concrete examples), and connect it back to your chosen analytical framework. Do not just describe patterns of exclusion — explain the mechanisms that produce them and the structural conditions that sustain them. What makes this section analytical rather than descriptive is the causal explanation: not just “women are underrepresented in AI development” but “why is that the case, what structures maintain it, and what would have to change?”
Section Three: Engage the Counter-Arguments
Take the inclusion arguments seriously. If your essay ignores the genuine promise of AGI for democratizing access to expertise, a careful reader will notice the gap. The analytical task is not to dismiss these arguments but to qualify them: under what conditions does the universal access argument hold? What structural preconditions are required? Is the diversity dividend argument already being acted on, or is it mostly aspirational? Engaging the counter-arguments makes your overall position stronger, not weaker, because it shows you understand the complexity of the issue rather than just cherry-picking evidence for one side.
Section Four: Implications and Prescription (If Your Prompt Requires It)
Some assignment prompts ask you to evaluate existing proposals or make recommendations. If yours does, this is where you do it. What would meaningful inclusion look like in AGI development and governance? What policy interventions, institutional reforms, or technical design choices would address the exclusions you have identified? Be specific. “More diversity is needed” is not a policy prescription. “Requiring mandatory demographic audits of AI development teams as a condition of regulatory approval, combined with public compute access programs for researchers outside the US and China” is a prescription — debatable, specific, and grounded in the analysis you have already done.
Conclusion: Return to the Central Tension
Your conclusion should do two things. First, summarize your central argument — not by restating everything you said, but by articulating the core insight your analysis has developed. Second, point outward: what does your analysis imply about the decisions being made right now? The best conclusions on this topic resist false resolution. The inclusion/exclusion dynamics around AGI are not solved problems. They are active contests. Ending with a clear-eyed statement of what is at stake — and what choices would tip the balance toward greater inclusion — is more intellectually honest than a tidy resolution.
Thesis Statement: Weak vs. Strong
Weak thesis (description, not argument): “AGI is a powerful emerging technology that raises important questions about who is included and who is excluded from its development and benefits.”
Strong thesis (specific claim + framework + stakes): “Viewed through the political economy of technology, the exclusion of most of the world’s population from AGI’s production, governance, and benefits is not a technical inevitability but the product of contestable choices about ownership, regulation, and access — an argument this essay develops across three axes of exclusion.”
The strong version commits to a specific argument, names the framework being used, identifies the scope of the analysis (three axes of exclusion), and implies a normative stance (these choices can and must be contested) without turning the intro into a policy brief. It tells the reader exactly what the essay will argue before the first body paragraph.
Research Sources That Will Actually Hold Up Under Grading Scrutiny
This is a field that moves fast. An essay written in 2026 that cites only sources from 2019 will look outdated. But recency is not everything — a Twitter thread from a respected researcher is not a citable source. Here is a map of reliable sources by type.
| Source Type | Where to Find It | Best For |
|---|---|---|
| Peer-reviewed AI ethics research | ACM Digital Library, arXiv (cs.AI, cs.CY), Google Scholar | Technical bias evidence, algorithmic fairness, governance frameworks |
| Stanford AI Index (annual) | aiindex.stanford.edu — free, publicly available | Data on global AI investment, research geography, governance activity |
| Books from established scholars | Crawford’s Atlas of AI; Acemoglu & Johnson’s Power and Progress; Noble’s Algorithms of Oppression | Theoretical frameworks; sustained argument with evidence |
| Policy documents and reports | EU AI Act text (EUR-Lex); OECD AI Policy Observatory; UK AI Safety Institute reports | Governance and regulatory landscape; inclusion/exclusion in policy design |
| Think tank research | Brookings; AI Now Institute; Centre for the Governance of AI (Oxford) | Policy analysis; governance recommendations; labour market research |
| Primary research papers | Bender et al., “On the Dangers of Stochastic Parrots” (2021); Buolamwini & Gebru, “Gender Shades” (2018) | Direct evidence of technical bias and representational failure |
The Stanford AI Index: Your Starting Point for Data
The Stanford HAI AI Index is published annually and is freely accessible. It compiles data on AI research output by country, investment flows, workforce demographics, governance policy activity, and public opinion on AI. For an essay on inclusions and exclusions, the geographic concentration data — which countries are publishing research, where compute infrastructure is located, which governments are active in AI governance — gives you empirical grounding for claims that might otherwise feel like unsupported assertions. Cite the specific edition by year; the 2025 and 2026 editions will have the most current data.
One source trap to avoid: major AI labs publish significant amounts of research, policy analysis, and blog commentary. OpenAI, Google DeepMind, Anthropic, and Meta AI all produce substantial written material. This material is worth reading and some of it is publishable-quality research. But you should be aware of the source when you cite it — an Anthropic white paper on AI safety is not a disinterested academic source. It is produced by an organization with significant commercial interests in how AI safety is defined and regulated. That does not make it worthless, but it should be cited with appropriate awareness of its provenance.
The Analytical Mistakes That Cost Students Marks on This Topic
What Earns High Marks
- A clear, specific definition of AGI in your introduction
- A named analytical framework applied consistently throughout
- Arguments supported by specific evidence — data, studies, concrete examples
- Genuine engagement with the strongest version of the counter-argument
- Distinguishing between current AI limitations and the AGI trajectory
- Connecting technical design choices to governance and power structures
- Academic sources, not news articles or lab blog posts as primary citations
What Loses Points Fast
- Treating AGI and current AI interchangeably without acknowledging the difference
- A technology explainer in place of an analytical argument
- Vague claims about “bias” without identifying its source, mechanism, or evidence
- Dismissing inclusion arguments as naive without engaging their strongest form
- Limiting the analysis to the United States or the Global North
- No framework — just a list of observations about who has more or less access to tech
- Confusing description (“exclusion exists”) with analysis (“why it exists and what sustains it”)
The Techno-Determinism Trap
The most common analytical error in this essay type is treating AGI’s social effects as determined by the technology itself — as if the distributional outcomes of AGI are fixed in the system rather than shaped by political, economic, and governance choices. Statements like “AGI will inevitably displace workers” or “AGI will democratize expertise” commit this error. The technology creates possibilities and constraints; what happens within those constraints is determined by human decisions about ownership, access, regulation, and redistribution. Your essay should be consistently clear that the inclusion/exclusion outcomes being analyzed are the results of choices — contestable, revisable choices — not technological fate.
What Makes This Essay Hard — and What Makes It Worth Writing Well
The difficulty of this essay is not the research. There is plenty of it. The difficulty is that the topic sits at the intersection of at least three fields — technology, ethics, and political economy — and a compelling analysis requires you to hold all three in view simultaneously without collapsing the argument into a single lens.
Technology determinism is the enemy. So is naive optimism. So is pure critique that never engages with the genuine promise of AGI. The strongest essays on this topic are the ones that take the complexity seriously: that acknowledge the real potential of AGI to expand human capability and reduce the cost of expertise, while being clear-eyed about who currently controls AGI development, what values are being embedded in these systems, and what structural conditions would have to change for the inclusive scenario to actually materialize.
That is a harder essay to write than either “AGI will save us” or “AGI will harm marginalized communities.” But it is the analytical position that the evidence and the frameworks actually support — and it is the position that earns marks from instructors who know this literature.
Start with a specific definition. Pick one framework and apply it. Find two or three axes of exclusion with strong evidence. Engage the best counter-arguments. State your thesis clearly in the introduction and deliver on it. If you need support at any stage — from argument development to full essay completion — the writing specialists at Smart Academic Writing are available for essay writing help, philosophy assignments, research papers, and editing and proofreading across every level and deadline.