Don't believe the hype:
🛠️ Automated #accessibility testing tool vendors will tell you their tool can find more issues than it can.
🤖 #AI infused accessibility testing tool vendors will tell you their tool can find more issues than it can.
https://lnkd.in/eWK-8ssK
I agree with everything in your article. However, although it says it has been updated to include all 55 success criteria, 5 are missing - 1.4.3, 2.3.1, 2.5.1, 2.5.2 and 2.5.4.
Your summary appears to suggest that 6 success criteria can be automated, but in the detailed analysis all success criteria have caveats. I would not disagree with the latter, and in some cases I think you have been over-generous.
BTW, the second paragraph contains a typo - "additonal ".
Although automated accessibility testing has its role, it still cannot replace manual auditing by professionals. Ideally, people with disabilities should also be included in testing.
AI agents are becoming the next generation of users, and they’ll interact with your website the same way people do.
Tools like OpenAI’s Operator are already navigating interfaces, clicking buttons, and completing tasks. The catch: their success depends on how accessible your digital experience is.
Just like people, AI agents need structure, clarity, and consistency to understand and act.
Visual agents rely on clean design, consistent components, and clear labels.
Data-driven agents depend on semantic HTML, ARIA roles, and well-documented APIs.
Accessibility gives you the foundation for both. It’s what makes your site usable, and now, it’s what makes your site readable to AI.
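As a rough illustration (my own sketch, not taken from the linked article; element names and labels are invented), this is the kind of markup that both assistive technology and a data-driven agent can parse:
<!-- Semantic landmarks let screen readers and AI agents tell navigation from content -->
<nav aria-label="Main">
  <a href="/pricing">Pricing</a>
  <a href="/contact">Contact</a>
</nav>
<main>
  <h1>Book a demo</h1>
  <!-- A real <button>, not a click-handling <div>, is exposed with the "button" role -->
  <button type="button">Request a demo</button>
</main>
A visual agent reads the on-screen labels; a data-driven agent can query the roles and structure directly. The same markup serves both.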
Companies that apply accessibility best practices today are preparing for the next wave of AI interaction while simultaneously making the user experience better for everyone.
Read more on how accessibility supports AI agent success:
https://lnkd.in/eMj-trqt
#WebAccessibility #AI
💡 AI is changing how we build websites — but not always for the better.
Many developers use ChatGPT or Claude to generate sites in minutes… but end up shipping code they don’t fully understand.
In my latest blog, I’ve broken down the real problems with AI-coded websites and how to fix them:
Defining project goals before prompting
Structuring architecture the right way
Writing secure, understandable code
Using tests, analytics, and compliance tools
Whether you’re a solo founder or full-stack developer, this guide helps you move from AI-generated to AI-assisted craftsmanship.
🔗 Read the full post: https://lnkd.in/gUAGS6fR
#AI #WebDevelopment #ChatGPT #Claude #Automation #CleanCode #Nextjs #Developers #AICoding
🚀 20 AI Prompts That Will Supercharge Your Web Development!
AI is transforming how we build websites — faster, smarter & more creative than ever! 💻
Here are 20 powerful ChatGPT prompts to make your next web project shine ✨
💡 1. Website Structure & Planning
✅ “Generate a sitemap and page structure for a [business type] website.”
✅ “Suggest essential sections and features for a modern portfolio website.”
✅ “Create a wireframe layout idea for a responsive landing page about [product/service].”
✅ “What are the must-have components for a professional business website in 2025?”
🎨 2. UI/UX & Design Ideas
✅ “Suggest a clean, modern color palette and typography for a [niche] website.”
✅ “Generate UI/UX improvement ideas for a homepage that isn’t converting well.”
✅ “Explain how to design an accessible and user-friendly navigation bar.”
✅ “Give me 3 hero section ideas for a SaaS landing page.”
⚙️ 3. Coding & Development
✅ “Write clean HTML, CSS, and JS code for a responsive navbar with dropdowns.” (sample output sketched below)
✅ “Generate a React component for a pricing card with hover animation.”
✅ “Suggest performance optimization techniques for a Next.js website.”
✅ “Write a contact form using PHP that sends email with validation.”
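To ground the first coding prompt in that list, here is a stripped-down sketch of the kind of markup it is asking for (structure, IDs, and links are illustrative, not a definitive implementation):
<nav aria-label="Main">
  <ul class="nav">
    <li><a href="/">Home</a></li>
    <li>
      <!-- The dropdown toggle is a real button so keyboards and assistive tech can operate it -->
      <button type="button" aria-expanded="false" aria-controls="services-menu">Services</button>
      <ul id="services-menu" hidden>
        <li><a href="/web-design">Web design</a></li>
        <li><a href="/seo">SEO</a></li>
      </ul>
    </li>
  </ul>
</nav>
<!-- A few lines of JS toggle aria-expanded and the hidden attribute; CSS media queries handle the responsive layout -->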
✍️ 4. Content & Copywriting
✅ “Write SEO-friendly content for a homepage of a web development agency.”
✅ “Generate meta titles and descriptions for a [service] page.”
✅ “Write CTA lines for web design and development landing pages.”
✅ “Create an About Us section that sounds trustworthy and creative.”
🚀 5. SEO & Performance Optimization
✅ “Suggest on-page SEO improvements for a WordPress website.”
✅ “Generate a checklist for website launch and SEO setup.”
✅ “Analyze and suggest ways to reduce page load time.”
✅ “List technical SEO practices for React or Next.js websites.”
✨ Try these prompts today and see how AI can boost your workflow!
#WebDevelopment #AItools #ChatGPT #CodingTips #WebDesign #DeveloperTools #Productivity
🤖 3 Types of AI Agents PMs Need to Understand
1️⃣ Browser-Based Agents
𝘌𝘹𝘢𝘮𝘱𝘭𝘦𝘴: 𝘖𝘱𝘦𝘯𝘈𝘐 𝘖𝘱𝘦𝘳𝘢𝘵𝘰𝘳, 𝘢𝘶𝘵𝘰𝘯𝘰𝘮𝘰𝘶𝘴 𝘣𝘳𝘰𝘸𝘴𝘦𝘳 𝘢𝘨𝘦𝘯𝘵𝘴
→ Navigate websites like humans do
→ Vision model "sees" the page and decides where to click
→ 📦 Sandbox: None (relies purely on web interaction)
→ ⚙️ How it works: Screenshots webpage → Vision model interprets → Executes clicks/inputs
→ ⚠️ The catch: Painfully slow (20-60 min per task). High token costs. Fails on complex/unfamiliar UIs.
→ 💡 PM implication: Great demos, poor production. Users won't wait 30 minutes. Can't handle edge cases.
2️⃣ 𝗟𝗶𝗺𝗶𝘁𝗲𝗱 𝗦𝗮𝗻𝗱𝗯𝗼𝘅 𝗔𝗴𝗲𝗻𝘁𝘀
𝘌𝘹𝘢𝘮𝘱𝘭𝘦𝘴: 𝘎𝘦𝘮𝘪𝘯𝘪 𝘗𝘢𝘳𝘬 (𝘴𝘭𝘪𝘥𝘦𝘴/𝘴𝘩𝘦𝘦𝘵𝘴 𝘢𝘨𝘦𝘯𝘵𝘴), 𝘷𝘦𝘳𝘵𝘪𝘤𝘢𝘭-𝘴𝘱𝘦𝘤𝘪𝘧𝘪𝘤 𝘵𝘰𝘰𝘭𝘴
→ Code execution with pre-approved packages only
→ Generates slides, sheets, data analysis
→ 📦 Sandbox: Restricted environment with 3-5 whitelisted packages
→ ⚙️ How it works: LM writes code → Runs in controlled sandbox → Returns output (can't download new packages mid-execution)
→ ⚠️ The catch: Can't adapt to novel tasks. If the package isn't pre-installed, it fails.
→ 💡 PM implication: Fast and reliable for defined use cases. Scales well. But inflexible—you're choosing the sandbox capabilities upfront.
3️⃣ 𝗢𝗽𝗲𝗻 𝗦𝗮𝗻𝗱𝗯𝗼𝘅 + 𝗕𝗿𝗼𝘄𝘀𝗲𝗿 𝗛𝘆𝗯𝗿𝗶𝗱
𝘌𝘹𝘢𝘮𝘱𝘭𝘦𝘴: 𝘊𝘩𝘢𝘵𝘎𝘗𝘛 𝘈𝘨𝘦𝘯𝘵 (𝘋𝘦𝘦𝘱 𝘙𝘦𝘴𝘦𝘢𝘳𝘤𝘩 + 𝘖𝘱𝘦𝘳𝘢𝘵𝘰𝘳), 𝘔𝘢𝘯𝘶𝘴
→ Full code execution + web navigation → Can install packages, access websites, generate files
→ 📦 Sandbox: Open environment; can download any package, run complex scripts
→ ⚙️ How it works: LM decides whether to use browser OR sandbox OR both → Chains tools together → Human approval for payments
→ ⚠️ The catch: Most powerful but slowest and most expensive. Browser bottleneck remains.
→ 💡 PM implication: Handles complex professional workflows but costs 4-10x more than a limited sandbox. Speed is still a dealbreaker.
𝗪𝗵𝗮𝘁 𝗧𝗵𝗶𝘀 𝗠𝗲𝗮𝗻𝘀 𝗳𝗼𝗿 𝗬𝗼𝘂𝗿 𝗥𝗼𝗮𝗱𝗺𝗮𝗽:
☑ Choose your constraint — Browser flexibility vs. API speed. You can't have both.
☑ Sandbox scope = product scope — A limited sandbox isn't a compromise, it's a product decision about what problems you'll solve.
☑ Professional workflows first — B2B tools have APIs. Consumer apps don't. The tech determines your market.
☑ Trust blocks everything — Even perfect agents fail at checkout because users won't trust them with payments.
What architecture constraints are shaping your agent product? Would love to hear what you're learning 👇
♻️ Repost if you're making agent architecture decisions in 2025
#ProductManagement #AIAgents #ProductStrategy #AIProducts
We used to design for humans. Now we might be designing for their browsers 🌐
Two days ago, OpenAI dropped ChatGPT Atlas. It’s a browser that thinks with you. You can ask it to plan a trip, summarise research, or draft an email right inside the page you’re on.
And the timing isn’t random.
We’ve already seen Comet by Perplexity turning browsing into automation: literally a small army of agents running tasks for you in the background.
Then there’s Dia from The Browser Company: slower, calmer, more design-driven, almost meditative compared to Comet’s chaos.
So now we have AI browsers that summarise, decide, and even act on our behalf.
And act is the most interesting for me.
I keep wondering: what happens when the browser becomes the user?
When your website isn’t being read by a person anymore but by an AI assistant interpreting it for them?
Do we start designing for machines that understand humans instead of humans using machines?
Will context and conversation replace clicks and navigation?
And how do we make sure these systems stay transparent when they literally see and decide everything for us?
I think this is the beginning of the next UX frontier where “user experience” expands to include the AI sitting between us and the web.
It’s both exciting and slightly terrifying.
When browsers start making decisions for us, the balance of power shifts dramatically.
If an AI assistant decides which websites to summarize or which sources to trust, the open web as we know it starts to shrink.
From a UX perspective, that’s both fascinating and worrying.
Because the “user experience” might soon depend less on how we design interfaces and more on how AI browsers interpret them.
Imagine you’ve designed a product comparison page with thoughtful visuals, interactive filters, detailed specs.
The AI browser skims it, decides “too much noise,” and just tells the user: “This one’s best for you.” No exploration, no curiosity, no moment of discovery left.
If one or two companies own the assistant layer between humans and the web, say OpenAI with Atlas or Google with Gemini, they essentially control what people see and how they experience it.
It’s no longer a web of equal pages; it’s a filtered landscape where one assistant mediates access to everything.
So yes, it’s exciting, the idea of browsers that understand context, automate tasks, and make the web more intuitive.
But it’s also a reminder that good design ethics will matter more than ever.
We need to make sure the intelligent layer between users and information doesn’t quietly become a gatekeeper.
What do you think, is this the start of a smarter, more intuitive web, or the slow disappearance of the open one?
I uploaded a new podcast episode of the Future of UX Podcast 🎙️about this topic. Link in the comments ✨
OpenAI just launched Atlas - rethinking what browsers do!
Atlas shifts browsers from "navigate pages" to "complete tasks."
What Shipped
1. AI sidebar on every page
Ask ChatGPT about any webpage without leaving it. Summarize, analyze, compare - no copy-paste-switch-tab cycles
2. Agent Mode (Plus/Pro/Business)
ChatGPT completes multi-step tasks: research, book appointments, order things, fill forms, etc. Shows every step, asks approval for important actions. Can't download files or execute code - automation with oversight
3. Browser Memories (optional)
Remembers context across sessions. Fully controllable - delete anytime, disable entirely, or use incognito mode
4. Privacy defaults
Browsing data not used for AI training. Full control over history and memories
Why this matters
1. Success = task completion, not pageviews
The browser understands your goal, not just your search query. Metrics shift from clicks to "did you finish what you came for?" (task completion)
2. Websites need agent-friendly design
Clean HTML, proper labels, good accessibility = better agent interaction. Complex JavaScript, CAPTCHAs, and canvas-heavy UIs will struggle (see the form sketch after this list)
3. Discovery changes
The shift: users get answers in Atlas without visiting your site. You get cited, not clicked - "zero-click" discovery
4. The traffic problem
Stack Overflow saw a 50% traffic drop post-ChatGPT. When AI answers directly, sites lose visits. Recipe sites and news publishers face the same challenge.
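To make point 2 above concrete: an agent booking an appointment or filling a form relies on the same programmatic cues a screen reader does. A minimal sketch, with the endpoint and field names invented for the example:
<form action="/book" method="post">
  <!-- Explicit label/id pairing tells assistive tech and agents alike what each field is for -->
  <label for="appt-date">Appointment date</label>
  <input id="appt-date" name="date" type="date" required>

  <label for="appt-email">Email for confirmation</label>
  <input id="appt-email" name="email" type="email" autocomplete="email" required>

  <button type="submit">Book appointment</button>
</form>
Markup that passes an accessibility audit is also the easiest for Agent Mode to interpret and fill.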
Two bigger implications:
1. Challenges Google's $200B search ad model
Conversational answers replace click-through results. This isn't just browser competition - it's business model disruption
2. Browsers become platforms
If Atlas adds payments and deep integrations, it's infrastructure - not just software. Less "Chrome alternative," more "SuperWebApp"
What to watch
1. Website blocking: Will sites block OAI-SearchBot? Mass blocking kills utility; allowing it kills traffic
2. Enterprise adoption: Will companies trust Agent Mode for internal workflows? Privacy concerns vs. productivity gains.
3. Agent autonomy: Current design needs oversight. Pressure moves toward full autonomy - when do users trust unsupervised actions?
4. Analytics evolution: Pageviews matter less. New metric: task completion rate
Bottom line
With Atlas, the aim is to understand your goals instead of just displaying pages.
If this becomes the default, and there’s no reason why it shouldn’t, the web stack changes - how sites are built, how content is discovered, how businesses monetize, how privacy works.
Early. But directionally clear.
https://lnkd.in/g2ZRjnkb
#ai #artificialintelligence #browsers #agenticai
While recording my latest YouTube video, I noticed something interesting: after reading a Dataverse plugin’s code, AI can 𝘴𝘮𝘰𝘰𝘵𝘩𝘭𝘺 𝘪𝘯𝘧𝘦𝘳 how that plugin should be registered (messages, steps, filtering, etc.). That made me wonder: 𝗶𝗳 𝗔𝗜 𝗰𝗮𝗻 𝗶𝗺𝗮𝗴𝗶𝗻𝗲 𝘁𝗵𝗲 𝗿𝗲𝗴𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻, 𝘄𝗵𝘆 𝗻𝗼𝘁 𝗴𝗶𝘃𝗲 𝗶𝘁 𝗮 𝘄𝗮𝘆 𝘁𝗼 𝗲𝘅𝗲𝗰𝘂𝘁𝗲 𝗶𝘁?
To explore this, and having also noticed how good AI is at running PACX commands, I extended 𝗣𝗔𝗖𝗫 by adding a brand-new 𝗽𝗹𝘂𝗴𝗶𝗻𝘀 namespace.
It brings 𝗰𝗼𝗺𝗺𝗮𝗻𝗱𝘀 𝗱𝗲𝘀𝗶𝗴𝗻𝗲𝗱 𝘁𝗼 𝘀𝘁𝗿𝗲𝗮𝗺𝗹𝗶𝗻𝗲 𝗽𝗹𝘂𝗴𝗶𝗻 𝗹𝗶𝗳𝗲𝗰𝘆𝗰𝗹𝗲 𝗺𝗮𝗻𝗮𝗴𝗲𝗺𝗲𝗻𝘁 and make the process transparent enough for AI-assisted workflows.
In my latest blog post, I share why I added these commands, how they work, and how they can fit into everyday development or CI/CD pipelines.
👉 𝗖𝗵𝗲𝗰𝗸 𝗶𝘁 𝗼𝘂𝘁 𝗵𝗲𝗿𝗲:
https://lnkd.in/de-6r3uw
I’d love to hear your feedback, ideas, or even wild “what if AI could…” scenarios around plugin management.
#PowerPlatform #PowerApps #Dataverse #DevTools #AI #Productivity #mvpbuzz #pacx
Let me say this: AI is a powerful tool—not a replacement.
The reality: Only 2.5% of websites are built purely with AI.
AI-generated code without expert oversight often costs MORE through vulnerabilities, performance issues, and expensive fixes down the line.
The winning approach? Expert developers leveraging AI = 10x productivity with quality you can trust.
The question isn't whether to use AI—it's whether you're working with developers who know how to wield it strategically.
Read our insights by clicking on the link below and learn why human expertise still matters in AI-powered development.
https://lnkd.in/gE-qD5su
#WebDevelopment #AIDevelopment #Magento2 #EcommerceSolutions #TechStrategy