The education sector has been no exception to the wave of disruptions heralded by AI technology, and at the center of it all sits Faheem. The homegrown startup has positioned itself as an AI-powered assistant that helps students study, leveraging audio and visual learning methods. To better understand how the startup operates, we sat down with CTO Mohamed Ghareeb. Edited excerpts from our conversation:
EnterpriseAM: What is Faheem’s elevator pitch?
Mohamed Ghareeb: Faheem is an AI tutor/companion that helps students study and revise using text, audio, and visual formats. It uses inquiry-based learning, an approach that adapts to the student based on their level of advancement. Based on the student’s responses to targeted questions, Faheem assesses how well they grasp a given topic.
EnterpriseAM: Why would a student use Faheem instead of existing large language models (LLMs), like ChatGPT?
MG: Faheem’s engine and knowledge base — which we spent years crafting — are basically our secret recipe; they are what distinguish our offering from others. Faheem is curriculum-aligned, and its knowledge base is built directly from the curriculum. For an Egyptian student enrolled in a national school, Faheem will only use content from government-issued textbooks. Faheem uses retrieval-augmented generation (RAG) and knowledge graphs to retrieve data from the curriculum. By contrast, general-purpose programs like ChatGPT answer inquiries from whatever internet data they were trained on.
Even when ChatGPT understands a certain lesson, it may answer an inquiry with information that is irrelevant to the student’s curriculum. Then there’s also the issue of hallucinations, a term for the plausible-sounding but incorrect responses that AI programs sometimes produce. LLMs trained on broad internet data are especially prone to hallucinating. Because every response is grounded in verified, curriculum-aligned data, Faheem’s hallucination rate is dramatically lower than that of general-purpose LLMs; our architecture is designed to virtually eliminate them.
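Faheem’s actual pipeline is proprietary, but the RAG idea Ghareeb describes — retrieve a curriculum passage first, then have the model answer only from it — can be sketched with a toy similarity search. The curriculum snippets and the bag-of-words “embedding” below are illustrative stand-ins, not Faheem’s data or code; real systems use dense vectors from a trained embedding model:

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count. Production RAG systems
    # use dense vectors from a trained embedding model instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical curriculum chunks standing in for textbook passages.
curriculum = [
    "Photosynthesis converts light energy into chemical energy in plants.",
    "The water cycle describes evaporation, condensation, and precipitation.",
]

def retrieve(question: str) -> str:
    # Return the chunk most similar to the question. A full RAG pipeline
    # would then prompt an LLM to answer using ONLY this retrieved text,
    # which is what keeps answers grounded in the curriculum.
    q = embed(question)
    return max(curriculum, key=lambda chunk: cosine(q, embed(chunk)))
```

Because the model is told to answer only from the retrieved passage, a question outside the curriculum simply finds no relevant chunk, which is the grounding mechanism that suppresses hallucinations.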
EnterpriseAM: Are AI models more suitable than human-to-human models in the edtech sector?
MG: Young people today have really short study times — they don’t sit down to study for long periods. They want something fast that gives them information and doesn’t bore them. Of course, the presence of the teacher will remain very important. But while Faheem and conventional teaching can adopt similar methods, the application can be used for impromptu reviews, last-minute exam prep, and discussions. Younger people today want to receive information in a different way. They are seeking a faster pace — that’s why Faheem exists.
EnterpriseAM: Can local edtech players integrate curricula into their software and pivot to an offering similar to Faheem’s?
MG: The real barrier to entry is the technical depth and consolidated expertise required to build this. In theory, anyone can take a textbook and use RAG to feed it into an LLM. By converting that book into numerical embeddings, a developer could create a tool that answers basic questions.
However, that basic approach loses two critical elements. First, it fails to account for the student’s specific grade and academic level. Second, it loses the context of the inquiry. Without context, an AI model merely retrieves the most similar-sounding statements from the textbook rather than actually “understanding” the student’s needs.
Faheem, by contrast, identifies both the student’s level and the context of the question before responding. It determines whether the student is asking a question while taking an exam, doing homework, or trying to understand the difference between two concepts. We achieved this by building a knowledge base that consists of described relationships between data points, rather than just simple embeddings. This allows the LLM to provide verified, curriculum-aligned answers tailored to the student’s specific situation.
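The distinction Ghareeb draws — described relationships between data points rather than plain embeddings, plus awareness of the question’s context — resembles a knowledge graph paired with an intent classifier. A minimal sketch, where the triples, the keyword-based context check, and the `answer` function are all invented for illustration:

```python
# Hypothetical curriculum triples (subject, relation, object);
# Faheem's real knowledge base is proprietary and far richer.
graph = [
    ("mitosis", "is_a", "cell division process"),
    ("meiosis", "is_a", "cell division process"),
    ("mitosis", "produces", "two identical daughter cells"),
    ("meiosis", "produces", "four genetically distinct cells"),
]

def classify_context(question: str) -> str:
    # Stand-in for a real intent classifier (exam, homework, comparison...).
    # A production system would use a trained model, not a keyword check.
    return "compare" if "difference" in question.lower() else "explain"

def answer(question: str) -> str:
    q = question.lower()
    # Pull facts only for the concepts the question actually mentions.
    concepts = {s for s, _, _ in graph if s in q}
    facts = [(s, r, o) for s, r, o in graph if s in concepts]
    if classify_context(question) == "compare":
        # For a comparison, keep the facts that differ between concepts.
        facts = [f for f in facts if f[1] == "produces"]
    return "; ".join(f"{s} {r} {o}".replace("_", " ") for s, r, o in facts)
```

Because the relations are typed, the system can answer a “what is the difference” question by retrieving the contrasting facts, where a pure embedding lookup would just return whichever sentence sounds most similar.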
EnterpriseAM: Is Faheem tied to a specific AI model, and how do you handle potential service disruptions from providers like OpenAI or Google?
MG: Faheem is LLM-agnostic, meaning we can work with GPT, Gemini, or even smaller language models. The secret recipe is our internal knowledge base. We use LLMs primarily to humanize answers retrieved from our verified data, rather than relying on the LLM to provide the core information. Because the know-how is in the knowledge base itself, we can adapt quickly and switch between models if a provider experiences a disruption.
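The LLM-agnostic design Ghareeb describes amounts to an adapter pattern: the knowledge base supplies the verified content, and any backend that implements a common interface merely phrases it. A rough sketch — the interface, method names, and stand-in backend are all invented for illustration, not Faheem’s actual API:

```python
from typing import Protocol

class LLMBackend(Protocol):
    # Minimal interface a provider adapter must satisfy.
    def humanize(self, facts: str, question: str) -> str: ...

class TemplateBackend:
    # Stand-in "model": a real adapter would call GPT, Gemini, or a
    # local LLM here, behind the same method signature.
    def humanize(self, facts: str, question: str) -> str:
        return f"Q: {question}\nA (from curriculum): {facts}"

def answer(question: str, facts: str, backend: LLMBackend) -> str:
    # The verified knowledge base supplies the content; the LLM only
    # rephrases it, so backends can be swapped during a provider outage.
    return backend.humanize(facts, question)

reply = answer(
    "What is osmosis?",
    "Osmosis is the movement of water across a membrane.",
    TemplateBackend(),
)
```

Swapping providers then means writing one new adapter class, with no change to the retrieval logic or the knowledge base.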
EnterpriseAM: Is it feasible to run these models on your local servers, and what are the cost implications of doing so?
MG: It is feasible, and we have already experimented with AI sovereignty. At the last ICT conference, we successfully ran Faheem on a local machine, where it responded using a local LLM. The choice between local servers and cloud APIs depends heavily on utilization.
EnterpriseAM: What’s next for Faheem?
MG: There are two important features that will be released soon. The first is Scan and Solve, which will allow students to scan anything from the curriculum — and I mean anything: a question, an article, etc. — for Faheem to break it down for them.
The other feature is Smartboard, which is a highly interactive service that uses voice, visuals, and text to explain a certain topic. It will simulate a real teacher’s explanation, using both colloquial Arabic and English, and allow the student to interrupt with questions or requests for further elaboration at any point. Apart from these two features, we are working on integrating the IG curriculum into the app.
EnterpriseAM: How can Faheem assist government policymakers in understanding the educational landscape?
MG: It helps them significantly. We produce reports showing how a student’s performance has evolved or improved since they began using Faheem, including their specific answers and grades. This data gives the Education Ministry analytics and insights that can help guide investment and policy decisions.
EnterpriseAM: Does the AI operate entirely independently, or is there a human element involved in the quality control of educational content?
MG: Humans play a major role. To build our knowledge base, we enlisted real teachers to perform validation and auditing. This ensures that the content the AI provides is verified and strictly aligned with the curriculum.