A wearable AI product is finding a less-than-ideal use case
Smart glasses are increasingly marketed as convenient assistants for everyday life, but reports from China suggest they are also becoming tools for exam cheating. According to the report, a university student identified as Vivian used Rokid AI glasses to scan questions and display answers on an integrated screen, then began renting the device to classmates as a side business.
That detail captures the shift neatly. What might once have been framed as an isolated misuse now looks more like a small market. On secondhand platforms such as Xianyu, smart glasses are reportedly being rented for the equivalent of about $6 to $12 per day, depending on the model. That lowers the barrier for students who do not want to buy the hardware outright but still want access during a key exam period.
Why smart glasses change the cheating problem
Cheating technologies are not new. What changes with smart glasses is concealment and speed. The report says students can use a ring-shaped controller to operate the devices covertly, helping them answer English and math questions. Because current products can closely resemble ordinary glasses, detection becomes harder for teachers and proctors who are not specifically watching for them.
The appeal of the hardware is obvious. Smart glasses promise hands-free access to information while looking relatively normal in a classroom. Add AI, translation, image analysis, or prompt-response capabilities, and the devices become powerful enough to undermine conventional exam supervision. A product category built around convenience and assistance can, in the testing context, quickly become an unfair advantage machine.
Institutions are starting to respond
Chinese education authorities have reportedly begun banning the devices from national college entrance and civil service exams. That suggests administrators recognize the risk at the highest-stakes end of the testing pipeline. But the report also notes that many teachers have not yet caught on to the trend, which creates a familiar lag between consumer technology adoption and institutional response.
That lag is where misuse scales. When a device is visually subtle, widely available, and rentable, it does not need mass adoption to cause disruption. It only needs enough students to prove the method works. Once that happens, schools are pushed from ordinary anti-cheating enforcement into a more difficult problem: distinguishing normal wearables from networked or AI-assisted ones in real time.
The experiment that sharpened the concern
The report cites an experiment in which researchers at the Hong Kong University of Science and Technology added OpenAI's GPT-5.2 model to a pair of Rokid glasses and had a student wear them during a stressful final exam week. The reported outcome was a final course score of 92.5 in an undergraduate computer communication networks class with more than 100 students.
That example does not by itself prove the technology will work equally well across all subjects or contexts. But it does show why the issue is moving quickly from curiosity to policy challenge. If wearable AI can meaningfully assist with real test performance while remaining hard to detect, then the problem is no longer speculative.
The broader lesson is uncomfortable but clear. As AI devices shrink and blend into ordinary objects, integrity systems built for phones and laptops become less adequate. Smart glasses are useful for many legitimate tasks. But the same features that make them helpful in daily life also make them potent in places where hidden assistance breaks the rules. In the classroom, that means educators are now facing a new kind of cheat sheet: one you can wear on your face.
This article is based on reporting by Futurism and was originally published on futurism.com.