*Based on our conversation with Mike Hudson, Founder of MHF, Knowbot and TestRAMP.
Large language models (LLMs) are a form of generative artificial intelligence (AI) that offer transformative opportunities for organisations with complex information ecosystems. But deploying them responsibly requires technical pragmatism, cultural awareness, and a respect for context.
The Mike Hudson Foundation (MHF), a specialist AI donor, has developed Knowbot, an LLM-powered ‘answer engine’ that sits on websites and answers users’ questions. Our recent conversation with MHF’s founder surfaced some valuable insights for data science practitioners working in this sphere.
1. Start With the Simplest Possible Use Case
One of Knowbot’s core design principles was minimal friction. MHF looked for a “gateway use case”: a low-risk, easy-to-understand tool that organisations could immediately see value in and adopt quickly.
Practitioner takeaway: Don’t begin with the most ambitious AI project your organisation can imagine. Begin with an easy, low-risk project that still delivers value, and treat it as a learning experience.
2. Culture Matters More Than Budget
Hudson notes that AI readiness among nonprofits varies widely and isn’t correlated with organisational size. Some large charities are slow to innovate due to bureaucracy; some small ones are enthusiastic but unlikely to benefit.
Practitioner takeaway: When planning an LLM deployment, assess cultural readiness, not just technical readiness. Ask:
Who are the internal champions?
How much AI literacy exists?
How cautious is the organisation by default?
This will drive adoption far more than infrastructure.
3. Build for Trust First, Then Functionality
The biggest obstacle MHF faced wasn’t the model, the infrastructure, or the code. It was accessing the right decision-makers and establishing trust, both in LLMs generally and in Knowbot specifically.
Practitioner takeaway: AI deployments in nonprofits are trust projects as much as technical ones. Practitioners should:
Engage early with leadership.
Be explicit about risks and mitigations.
Provide clear, responsible documentation.
Avoid overclaiming what the model can do.
The more transparent the process, the smoother the adoption.

4. Where Appropriate, Restrict the Model’s Knowledge Domain to Reduce Risk
Knowbot deliberately confines itself to the content on the host organisation’s website(s), plus the model’s own built-in general knowledge. It doesn’t trawl the open internet. This dramatically limits opportunities for hallucinations, unsafe advice, or reputational risk.
Practitioner takeaway: Whenever possible, design LLM answer engine systems that operate on curated, organisation-owned content. Domain restriction is one of the most effective forms of practical AI safety.
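To make the idea concrete, here is a minimal Python sketch of domain restriction as a retrieval guard: candidate documents are filtered against an allowlist of organisation-owned hosts before anything reaches the model’s context. The host names and document schema are hypothetical; Knowbot’s actual implementation is not public.

```python
# Minimal sketch of domain restriction as a retrieval guard.
# Host names and the candidate-document schema are hypothetical.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example-charity.org", "docs.example-charity.org"}

def is_allowed_source(url: str) -> bool:
    """Accept a document only if it comes from an organisation-owned host."""
    return urlparse(url).hostname in ALLOWED_HOSTS

def build_context(candidates: list[dict], max_docs: int = 5) -> list[dict]:
    """Filter retrieved candidates, e.g. {"url": ..., "text": ...}, to curated
    sources before prompting the model; anything off-list is never sent to the LLM."""
    return [doc for doc in candidates if is_allowed_source(doc["url"])][:max_docs]
```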
5. Expect Surprising User Behaviour — And Design for It
One of the unexpected patterns in early usage: people asked Knowbot, “Who are you?” This prompted the team to add a new prompt component and to require every partner to host a “What is Knowbot?” page; a sketch of this kind of composable prompt appears at the end of this section.
Practitioner takeaway: Build processes for:
Unexpected inputs
Prompt evolution
Iterative refinement
LLM deployment is never “set and forget.”
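As an illustration of the prompt-evolution point, the sketch below shows one way to keep a system prompt composable and versioned, so that an addition such as an identity component stays reviewable. The component names and wording are illustrative assumptions, not Knowbot’s real prompt.

```python
# Hypothetical sketch of a composable, versioned system prompt.
# Component names and wording are illustrative, not Knowbot's real prompt.
PROMPT_VERSION = "v2"  # bumped whenever a component changes, so behaviour is traceable

PROMPT_COMPONENTS = {
    "role": "Answer questions using only the host organisation's website content.",
    # Added after early users kept asking "Who are you?":
    "identity": ("If asked who or what you are, explain that you are Knowbot, "
                 "an answer engine, and point to the host's 'What is Knowbot?' page."),
    "refusal": "If the answer is not in the provided content, say so plainly.",
}

def build_system_prompt() -> str:
    """Assemble the prompt from named components so each change is reviewable."""
    return "\n".join(PROMPT_COMPONENTS.values())
```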
6. Technological Timing Matters — And Keeps Improving
Hudson emphasised that many capabilities now considered standard (e.g. long context length, ring-fenced access to specific types of knowledge, ease of server deployment) would have been impossible even a year earlier. The tools needed to fulfil a nonprofit’s evolving needs often appear in the LLM ecosystem soon after the nonprofit requests new functionality, making the decision whether to ‘build custom’ or ‘wait’ a tricky one.
Practitioner takeaway: Stay current. Model capabilities, guardrails, and hosting options evolve at high speed. What was impossible last quarter may be trivial today.
7. Value Impact Over Volume
Knowbot’s LLM processing costs MHF money, so Knowbot’s team evaluates success not just by the number of questions answered but by the relevance of those questions to valuable decision-making. A tool that helps a policymaker or researcher retrieve something critical can have outsized impact.
Practitioner takeaway: When measuring impact, develop metrics that capture qualitative value, not just quantitative usage (see the sketch after this list). For example, you might consider:
Complexity of queries
Decision relevance
Equity of access
Whether the tool reduces burden on staff
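One hedged illustration of such metrics: the Python sketch below summarises a query log by qualitative tags rather than by raw counts alone. The log schema and tag names are assumptions for the example, and how tags get assigned (human review, a classifier) is left open.

```python
# Hypothetical sketch of impact metrics that weigh relevance, not just raw volume.
from collections import Counter

def impact_summary(query_log: list[dict]) -> dict:
    """Summarise a query log in which each entry looks like
    {"question": "...", "tags": {"decision_relevant", "complex"}}.
    How tags are assigned (human review, a classifier) is out of scope here.
    """
    total = len(query_log)
    tag_counts = Counter(tag for entry in query_log for tag in entry["tags"])
    return {
        "total_queries": total,
        "decision_relevant_share": tag_counts["decision_relevant"] / total if total else 0.0,
        "complex_share": tag_counts["complex"] / total if total else 0.0,
    }
```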

8. In Fast-Moving Environments, Admit What You Don’t Know
Both Knowbot and TestRAMP were built in contexts where knowledge was changing daily. Hudson emphasises the importance of asking “naïve” questions, learning quickly, and not pretending expertise where there is none.
Practitioner takeaway: Cultivate humility. Curiosity and fast learning beat early certainty. Pair technical exploration with organisational openness about unknowns.
9. Relationships and Partnerships Are Everything
Across both initiatives, success depended less on algorithms and more on building new human relationships.
Practitioner takeaway: AI for public good is a team sport. Map stakeholders. Share progress transparently. Community buy-in creates technical resilience.
10. The Next Frontier: Agentic AI
Hudson argues we’re at a turning point where AI will expand from being “retrieval engines” to becoming “agentic systems that can do things.” With that shift comes both opportunity and new categories of risk.
Practitioner takeaway: Prepare now for agentic systems. Start with controlled automation, clear constraints, auditable logs, and robust governance. Retrieval is only the beginning.
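As a starting point, the sketch below shows the shape of controlled automation: an agent-proposed action is executed only if it is explicitly allowlisted, and every attempt, allowed or refused, is appended to an auditable log. The action names and log format are illustrative assumptions, not a prescribed design.

```python
# Hypothetical sketch of "controlled automation": an action gate with an audit trail.
import json
import time

ALLOWED_ACTIONS = {"search_site", "draft_reply"}  # illustrative allowlist

def execute_action(action: str, args: dict, audit_path: str = "agent_audit.jsonl") -> None:
    """Run an agent-proposed action only if it is explicitly allowlisted,
    and append every attempt (allowed or refused) to an auditable log."""
    allowed = action in ALLOWED_ACTIONS
    with open(audit_path, "a") as f:
        f.write(json.dumps({"ts": time.time(), "action": action,
                            "args": args, "allowed": allowed}) + "\n")
    if not allowed:
        raise PermissionError(f"Action '{action}' is not permitted by policy.")
    # ...dispatch to the real implementation of the allowed action here...
```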
Read our full conversation with Mike Hudson here.
Find out more about the Mike Hudson Foundation here.
Mike Hudson is an entrepreneur in technology and electronic markets who now uses his expertise to help solve social problems. He founded TestRAMP, a pandemic nonprofit social market described as a “major contribution to Covid PCR testing and genomic sequencing”, and donated its £2.4mn profits to charity. Mike is a Fellow of ZSL and adviser to its CEO, an honorary Research Fellow at City, University of London, a member of the Responsible AI Institute, and a Foundation Fellow at St Antony’s College, University of Oxford.