Artificial Intelligence provides an algorithm to undertake any computerised task. This introduces an inherent risk: the algorithm processes a particular interpretation of an input event as coded, and that interpretation can differ from how the user would process the same event under standard practice.
Agnostic Development is a contemporary phenomenon, a much-glorified term for what would otherwise be known as coding in the dark. Either way, the AI developer does not actually know how the standard methods of processing work, but develops code assuming a Terra Nullius, as if nothing existed before. Such assumptions are dangerous: a zero-based approach, applied to the real world, would cause many accidents and unwarranted events.
A good example of Agnostic Development is buying a mobile phone built on the latest 6G technology where the associated wireless services cannot deliver such high-speed data through a comparable telephone exchange. Here the buyer becomes an Agnostic User, and an increased health risk is introduced.
Another example is the use of AI in accounting software, where popular AI methods displace standard accounting practices. When a business buys foreign exchange to make payments overseas and the value of the holding in local currency changes, standard accounting treats that change as Other Income. Popular accounting software, however, uses AI to adjust the buying price of the foreign currency instead, thereby causing an accounting fraud. The professional accountant who uses such software can be held accountable for the consequences of this Agnostic Development.
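The contrast between the two treatments can be sketched numerically. The figures below are invented for illustration (LCU stands for a generic local currency unit); the point is only to show how restating the buying price makes the exchange gain vanish from income.

```python
# Hypothetical sketch of the two foreign-exchange treatments described above.
# Assumed figures: a business buys USD 10,000 at 80 LCU/USD; the rate later
# moves to 85 LCU/USD at the reporting date.

usd_bought = 10_000
rate_at_purchase = 80.0    # LCU per USD when the currency was bought
rate_at_reporting = 85.0   # LCU per USD at the reporting date

cost_basis = usd_bought * rate_at_purchase        # 800,000 LCU
current_value = usd_bought * rate_at_reporting    # 850,000 LCU

# Standard treatment: the change in value is disclosed separately
# as Other Income (here, an exchange gain of 50,000 LCU).
other_income = current_value - cost_basis
print(f"Other Income (exchange gain): {other_income:,.0f} LCU")

# The displacement described above: the software silently restates the
# buying price to the current value, so the gain never appears as income.
restated_cost_basis = current_value
reported_gain = current_value - restated_cost_basis  # always 0: the gain is masked
print(f"Income reported after restatement: {reported_gain:,.0f} LCU")
```

Both versions hold the same currency; only the disclosure differs, which is exactly why a reader of the restated accounts cannot see the gain.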
The motivation for using Agnostic methods in AI may be the financial investment they attract from entities that stand to gain from changing standard practices. However, when the opportunity arises to disclose what those standard practices would be replaced with, no better option, one that is safer, time-proven and universally acceptable, can be produced, even for the most Agnostic User.
Comments
Thanks for putting language on this – “coding in the dark” over established domain standards is exactly where a lot of quiet risk sits.
In my own work I frame this as a nature vs representation problem:
• N(S) – what the system actually is and does (including how it handles established standards like IFRS/GAAP, telecom limits, etc.)
• R(S) – how it presents itself to users (e.g. “IFRS-compliant accounting software”, “standards-based network solution”).
When N(S) quietly departs from domain best practice, but R(S) still claims to stand on that practice, the gap D(S) = d(N,R) becomes large. That’s very close to what you call Agnostic Development: acting as if the world is Terra Nullius while still borrowing the trust attached to existing standards.
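To make the idea concrete, here is a minimal sketch of an N/R-gap check as I read it from this comment. The property names and the distance function are illustrative only, not part of RAI or RAA.

```python
# Illustrative N(S)/R(S) gap check. R(S) is what the product claims;
# N(S) is what testing actually observes. The property keys below are
# hypothetical examples, not a real compliance checklist.

claimed = {                       # R(S): the representation
    "fx_gain_booked_as_other_income": True,
    "audit_trail_immutable": True,
}
observed = {                      # N(S): the observed nature
    "fx_gain_booked_as_other_income": False,  # buying price was restated
    "audit_trail_immutable": True,
}

def gap(claimed: dict, observed: dict) -> float:
    """D(S): fraction of claimed properties not borne out in observation."""
    if not claimed:
        return 0.0
    mismatches = sum(1 for key, value in claimed.items()
                     if observed.get(key) != value)
    return mismatches / len(claimed)

print(f"D(S) = {gap(claimed, observed):.2f}")  # one of two claims fails
```

A scanner or impact assessment would of course need domain-specific probes behind each property, but the output shape, a single auditable gap score, is the part that connects to the article's "agnostic vs standards-aware" distinction.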
I’ve been developing an open framework called Reality-Aligned Intelligence (RAI) and a companion audit stack RAA that try to make this gap measurable and auditable:
• RAI metaframework (ontological honesty / N–R gap):
Reality-Aligned Intelligence (RAI): A Metaframework for Ontologically Honest AI Systems – DOI: 10.5281/zenodo.17686975
• RAI governance & audit stack (who checks what, and how):
Reality-Aligned Auditing (RAA): A Governance Stack for Ontologically Honest, Relationally Safe AI – DOI: 10.5281/zenodo.17814922
I think there’s a strong connection between your “agnostic vs standards-aware” distinction and an explicit N/R-gap check in AI governance and tooling (e.g. scanners, impact assessments).
If you’re interested in exploring that overlap, I’d be glad to compare notes.
Niels Bellens – independent researcher, AI & relational safety
ORCID: 0009-0008-1764-4108
Email: niels.bellens@proton.me