We’ve always known that the pace of law and policymaking can never catch up with the pace of technology.
But with AI, this gap is only widening, at an accelerating rate. How do we govern something we don’t fully understand, that we know is still changing rapidly, and that is already profoundly shaping society and our future? How — assuming we should and could — do we legislate for a future that’s still being written in real time?
Many will say regulation risks stifling innovation, or that AI is “just a tool” and we risk overreacting. But tools, especially powerful ones, always need rules. The point isn’t to slow progress. It’s to make sure it doesn’t trample people on the way forward.
As a new Congress settles in, here are some rules that I think policymakers should study and consider.
First: Make Accountability Real
AI isn’t just writing essays and making memes. It’s making real, life-changing decisions. Who gets a loan? Who’s flagged for fraud? Who’s prioritized for medical care or quietly de-platformed online?
And when those decisions go wrong, as they sometimes will, the fallback can’t just be a shrug and a vague “It’s the algorithm.” That’s not good enough.
What should AI legislation actually do?
· Require auditability – AI providers (tech and social media companies, government, public service providers, and whoever else runs these platforms and services) must be able to explain what their systems are doing, what data they’re using, and how decisions are made.
· Assign responsibility – Whether it’s the developer, deployer, or procurer, someone has to be legally and clearly accountable when harm is done.
· Create redress mechanisms – If an AI system wrongly cuts my utilities, blocks my loan application, flags me as a threat, or spreads falsehoods about me, I should be able to challenge it. “Talk to the chatbot” is not a real appeals process.
· Require explicit consent for use of intellectual property – This ship may have already sailed, but moving forward, the use of copyrighted works to train AI systems should be opt-in, not assumed. Creators (particularly writers, artists, musicians, scientists, researchers, and journalists) deserve the right to decide if and how their work is used, and to be fairly compensated when it is.
· Demand explainability in high-stakes areas – In fields like education, policing, hiring, or credit scoring, black-box algorithms shouldn’t operate without human oversight and clear justifications for their decisions.
In short: In high-stakes decisions, there always has to be a human in the loop. Legislation must make it possible for any aggrieved person to ask “Who decided this?”— and get a straight answer.
Second: Don’t Smother the Goose That Lays the Golden Eggs
At the same time, let’s recognize that overregulating too early risks freezing innovation before we’ve fully explored what’s possible. Worse, it could drive the best ideas and people into less regulated (and less ethical) jurisdictions.
Regulation shouldn’t be about creating burdensome red tape. It should mean clear, fair boundaries — ones that foster trust, level the playing field, and keep innovation from tipping into exploitation. Markets do their best work when people trust the system.
So we need laws that are smart, flexible, and focused on outcomes, not just compliance checklists.
· Regulate uses, not tools – Don’t outlaw language models. Don’t ban access to social media. Regulate what happens when these tools are used to deceive, impersonate, or manipulate.
· Encourage sandboxing – Create safe spaces where startups and researchers can test AI under oversight, without fear of instant legal blowback.
· Support competition – Empower the Philippine Competition Commission and other sector regulators. Don’t make compliance so costly that only Big Tech can afford to play. Open-source and nonprofit projects need room to thrive, too.
The message should be: Innovation is welcome. We just want to make sure that it does not come at the expense of rights, dignity, or democratic trust.
Third: Build Widespread Information Literacy
None of this works without education. Not just coding bootcamps or digital skills programs (though those are important), but deeper, broader information literacy.
People need to understand what these technologies are, how they work, and how to navigate both their promises and their pitfalls. That applies not just to kids in school, but to adults trying to keep up with a world that’s moving faster than most of us can reasonably absorb.
These efforts need to be creative and engaging. And, to really make a dent, this kind of work needs to be collaborative, systemic, and inclusive.
It’s not enough to build better tools. We, as a society and as individuals, need to invest in better understanding.
· Integrate AI literacy into national curricula – Introduce basic concepts like data ethics and critical digital literacy in K-12 education, not just in computer science classes, but across subjects like civics, media, and even art.
· Establish lifelong learning incentives – Offer tax credits, stipends, or free training for adult learners to acquire digital and AI-related skills, with a focus on underserved communities and workers in transitioning industries.
· Encourage cross-sector collaboration – Create public-private partnerships between government, tech companies, universities, and civil society to produce high-quality, nonpartisan educational resources on AI.
· Expand AI education in higher ed and vocational training – Encourage universities and trade schools to offer AI-related programs that serve both tech specialists and non-tech professionals who will need to work alongside AI tools.
· Audit and improve public sector literacy – Ensure government officials and public employees—especially those in policymaking, education, and law enforcement—receive regular training in AI concepts, risks, and use cases relevant to their roles.
Fourth: Respect the Planet
Then there’s the environmental toll. Training and running generative AI takes serious computing power. Some studies suggest that training a single large model can emit as much CO₂ as several cars over their entire lifespan.
The issue isn’t you using AI. It’s that everyone is. Just like the problem with cars isn’t that you drove to the store once. It’s that we built our entire infrastructure around driving.
So the real problem is one of scale.
Guilt-tripping individuals for using practical tools won’t move the needle. Most people are just trying to stay afloat, work more efficiently, stay competitive, or save time. You can’t fault someone for not wanting to fall behind in a system everyone knows can help make lives and careers easier.
What will actually make a difference? Smarter energy policy.
· Provide real incentives for clean infrastructure – If AI is going to be part of modern life, its environmental footprint can’t be shouldered by individuals alone; it has to be addressed at the system level. Consider tax breaks or subsidies for clean energy in AI infrastructure.
· Push for green data centers – Press tech companies to transition their data centers to renewable energy.
Fifth: Make Competitiveness a Policy Priority
If we’re serious about building a future-ready society, we need to treat competitiveness as a core part of AI policy (and vice versa), and not just something the private sector figures out on its own. Our ability to thrive in the AI era will depend on whether we make the right public investments today.
AI won’t just reshape tech companies. It will touch every sector. In healthcare, it’s improving diagnostics and tailoring treatments. In agriculture, it’s optimizing crop management and predicting yields. In construction, it’s being used for site safety, planning, and resource allocation. Retail is using AI to streamline supply chains and personalize experiences. In energy, it’s helping predict demand and reduce waste. This isn’t a fringe innovation — it’s a fundamental shift in how every industry operates.
That means our policy response has to be just as wide-reaching. It’s not enough to focus only on innovation hubs or displaced workers.
· Invest in national AI workforce development – Create targeted funding programs to support training and reskilling across all sectors, not just in tech. Prioritize industries being rapidly transformed by AI—like healthcare, manufacturing, agriculture, and energy.
· Support small and medium enterprises (SMEs) in AI adoption – Provide subsidies, training, and technical support to help SMEs integrate AI solutions in ways that actually increase productivity without displacing their workforce.
· Establish regional AI innovation hubs – Fund and support the development of AI innovation centers outside major metro areas, ensuring that economic opportunities tied to AI are more evenly distributed across the country.
· Create national standards for AI readiness in business and public services – Develop benchmarks and guidelines for what AI literacy and integration should look like across sectors, to ensure a shared baseline for innovation and accountability.
· Link competitiveness to equity – Ensure that any national AI strategy includes goals for reducing digital divides—whether geographic, socioeconomic, or generational—so that progress doesn’t come at the cost of widening inequality.
The countries that win the future will be the ones that democratize access to AI not just through infrastructure and platforms, but through education and opportunity. If we want to compete, we have to make sure the benefits of AI don’t get hoarded by a few companies or concentrated in a few cities.
Competitiveness has to mean capacity for everyone.
We must build a policy framework for AI that’s grounded in accountability, openness, and a sense of proportion — one that reflects both the risks and the promise of this technology.
The worst outcome isn’t moving too cautiously. It’s doing nothing at all, and letting systems evolve without ever deciding what kind of future we actually want.
Again, the point isn’t to stop progress. It’s to shape it. Because AI isn’t just a tool — it’s a force multiplier. One that can entrench inequities or expand access. Concentrate power or democratize it.
Our future depends on the choices we make now, while we still can.