Hacker News: wastemaster's comments

Hey HN. I'm the CTO at a startup building AI agents for hotels. Lately, I've been seeing a lot of hype around MCP being the "magic bullet" that will instantly make legacy systems AI-ready. After dealing with 15-year-old SOAP APIs and the reality of how LLMs handle multi-step database writes, I wrote down why I think MCP is great, but fundamentally misunderstood by the market. Happy to answer any questions about bridging LLMs with legacy monoliths.


Spot on. We often joke that a raw LLM acts exactly like an over-eager junior sales rep—it desperately wants to say "yes" to please the customer. Because they learned from us, they inherit the bad human habit of equating "helpfulness" with agreement. The difference is an AI will scale those broken promises instantly, which is why the constraints have to be architectural.

Happy to answer questions about failure modes, where we draw the line between retrieval and operational decisions, and which constraints actually reduced risk in production. The main lesson for us was that “don’t hallucinate” is too soft for real operations. We had to replace soft prompting with hard boundaries: verified data only, checks for critical actions, and escalation before collecting fulfillment details for anything unverified.

We deployed AI agents across 25 hospitality properties and logged ~46,000 guest conversations. The main failure mode wasn’t tone or retrieval. It was “confident gap-filling”: the model promising operational outcomes nobody had verified. This post is about the production failures we saw and the constraints we added to stop them.

Thanks — good call. For context, IVR is the "Press 1 for English, Press 2 for..." menu that most hotels use before routing calls. The key finding was that removing it made our metrics look worse, but actually revealed callers we were losing silently at the IVR stage.

Author here. We run AI voice agents across a European hotel network. This post covers 6 months of call data and one change (removing the IVR) that revealed something we didn't expect — our old phone system was silently filtering out callers before they ever reached the AI, making our metrics look better than they were. Happy to go deep on the voice pipeline, language detection, or anything else. Full product overview: https://github.com/polydom-ai/una-by-polydom

I’m the CTO of Polydom.ai. We built Una to fundamentally change how hospitality teams work by automating the entire first line of guest interaction.

Instead of a video or a pitch deck, the agent conducts its own live interview. You can "interview" the candidate to see how it handles operations across phone, web, and WhatsApp in real-time.

Why this matters: Una isn't just a chatbot; she's a digital employee that shields human staff from 100% of routine first-line noise—handling everything from initial booking inquiries to 3 AM guest requests.

Key Facts:

* 24/7 First-Line: Fully handles calls, bookings, and guest requests across all channels.
* Economics: $1/hour (AI) vs. ~$20/hour (human staff).
* Traction: Already running operations at properties like LeeGardens Orlando (135 rooms) and The President Hotel (142 rooms).
* Deep Integration: Connects directly with PMS platforms like Mews, Apaleo, Clock, Guesty, and Hospitable.

I'm curious to hear feedback on the conversational quality and the shift toward autonomous first-line operations.


Got two GDPR complaints, so I added automated removal of uploaded photos and of the metadata extracted from those photos.

Now photos and data are only stored for 3 minutes, then deleted. (The app architecture requires server-side processing of the uploaded file, and to compare the results with your photo I have to store it briefly so it can be shown back to you.)

I've also reflected this information on the website itself. Thank you for your interest in this project!
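For anyone curious what that cleanup might look like, here is a minimal sketch of a TTL sweep. The upload directory and the sweep function are assumptions for illustration, not the site's actual code:

```python
import os
import time

UPLOAD_DIR = "/tmp/uploads"   # hypothetical storage path
TTL_SECONDS = 3 * 60          # keep files for 3 minutes, as described above


def purge_expired(upload_dir: str = UPLOAD_DIR, ttl: int = TTL_SECONDS) -> int:
    """Delete any uploaded file older than the TTL; return how many were removed."""
    removed = 0
    now = time.time()
    for name in os.listdir(upload_dir):
        path = os.path.join(upload_dir, name)
        # Only plain files count; age is measured from last modification time.
        if os.path.isfile(path) and now - os.path.getmtime(path) > ttl:
            os.remove(path)
            removed += 1
    return removed
```

Run periodically (e.g. from a cron job or a background thread), this keeps the retention window honest without any per-upload bookkeeping.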


Well done. Does this apply to all images that have ever been uploaded to the website, or only to the ones uploaded after your update? If a user uploaded an image 2 hours ago, will their image still be stored, or has it been deleted already?


It applies to every photo uploaded to this website, at any time.


Nice, thanks for implementing this


Yep, it needs some tuning. I've added some more resources to it.


This is a good question. The thing is that, besides your photo, I have to extract a vector of facial parameters from it to match fakes. With this 128-dimensional vector I could find you in other photos (if I had any).

All the pics and their vectors stay on this small server. If you want to be deleted, please drop me an email: wastemaster@gmail.com
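For context on how matching against such a vector typically works, here is a minimal sketch. The Euclidean metric and the 0.6 cutoff are the conventional defaults for 128-d face encodings (e.g. in the `face_recognition` library); they are assumptions here, not the site's actual code:

```python
import math
from typing import List


def euclidean(a: List[float], b: List[float]) -> float:
    """Distance between two face embeddings."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def same_person(vec_a: List[float], vec_b: List[float],
                threshold: float = 0.6) -> bool:
    """Treat two embeddings closer than the threshold as the same face.

    0.6 is the customary cutoff for 128-d face encodings (an assumption
    here, not a claim about this particular site).
    """
    return euclidean(vec_a, vec_b) < threshold
```

This is exactly why the vector is sensitive: anyone holding it can run this comparison against new photos without ever seeing the original upload.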


> All the pics and their vectors stay on this small server.

This should be mentioned on the website.

> if you want to be deleted please drop me a letter wastemaster@gmail.com

There should be an easier mechanism to request deletion of our photo. Better still, request permission from the user to store the photo on your servers before actually storing it.

I think this is the bare minimum of transparency that should be required before letting people upload personal data, especially in this day and age.


Not to mention that the website is accessible from the EU, and you're required -by law- to obtain consent to store this personal data, and to tell people what exactly you're going to do with it, and with whom you're sharing it (if anyone).

I know everyone's used to the wild west but I'm glad that's changing, because of comments like yours - this transparency should NOT be something done out of the website owner's good heart (because as we've seen, most will just give us the finger), but enforced by law.

Edit: For the record, wastemaster's actually quite nice, and this is not directed at them, just websites in general.


How exactly would anyone in the EU prosecute someone outside of it for running their own website, if that individual operates outside the EU and has no affiliated organization or company? Just because something is accessible from the EU does not put it under EU jurisdiction to police.


The EU claims jurisdiction based on the fact that part of the interaction occurred in the EU, so they can fine you (it should be noted that the GDPR applies to data related to people in the EU, not related to EU citizens living elsewhere). Whether they can collect on those fines is a different matter.


How do they intend to fine non EU residents hosting a website outside of the EU? I could see if it was a company but if someone is running a server with a not for profit site on it with no way to identify the site owner and an EU resident visits it, good luck trying to fine anyone. The EU does not own or even control the internet outside of their borders.


After doing some research, it appears that only businesses and organizations are responsible for compliance with the GDPR.


> and you're required -by law-

Does the GDPR apply to non-commercial, non-business, non-organizations?


Yes. If the organisation/company/service is processing the data of the users, GDPR applies.

https://gdprexplained.eu/who-has-to-comply/


Why do you care what happens with a photo of your face? Many thousands of them exist; you probably have a profile photo on gravatar, or linkedin, or twitter somewhere anyway, to say nothing of the many thousands upon thousands of pictures of your face captured in frames on surveillance camera footage.

You provide this information (a picture of your face) to every convenience store, casino, bank, airport, and office building you walk into, many hundreds of times per day, for permanent storage. What is the threat model here from someone with a webserver having a single picture of your face with no other associated identifying information about it?


Agreed, thank you. Somehow I missed that; I'm going to add it with the next update.


The fact that you don't want to immediately delete the data after processing is a cause for concern.

I can't think of any reason not to immediately delete the data, other than that you intend to use it for something else in the future.

That said, I appreciate your honesty. If you had actually nefarious intentions, you would presumably just claim that you deleted the data when you don't.


I would honestly suggest just deleting the photo after use, there is such a big minefield with something like this and the EU


Can’t you just remove all the stored data?


Already doing this! Cleaning up all the uploaded info in 3 minutes


Why don't you delete them all automatically as soon as you've finished processing them?


Already doing this! Cleaning up all the uploaded info in 3 minutes after upload!


> All the pics and their vectors stay on this small server.

Not to take a jab at you specifically, but the fact that someone that can make websites like these is ignorant enough about privacy (law) to casually drop this line marks a worrying development in the accessibility of AI tech.

For impactful technologies, we probably want the required domain knowledge to come with some structural social disciplining so that we can collectively steer that impact in the right direction (whatever that is). Clearly AI libraries have become so easy to use professional ethics are not part of the curriculum.


> This is a good question. The thing is that, besides your photo, I have to extract a vector of facial parameters from it to match fakes. With this 128-dimensional vector I could find you in other photos (if I had any)

And - with all due respect - an alternative would be providing an open-source program to compute this "128-dimensional vector" offline, and to upload only the vector and NOT the photo.


Uh why wouldn't they be automatically trashed after processing?


Can you do the extraction client-side, and only send the vector?


Sending this vector is arguably even worse than sending the photo. The vector alone would allow someone to match your face without your original photo! (An array of 128 floats requires less storage and offers less transparency.)

For example, if I extract a color map from your photo, is that still your personal data?
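To make the storage point concrete: 128 single-precision floats pack into just 512 bytes, far smaller than any photo, yet enough to re-identify a face. A quick illustration (the embedding values are placeholders):

```python
import struct

# A hypothetical 128-dimensional face embedding.
embedding = [0.5] * 128

# Packed as 32-bit floats it occupies 128 * 4 = 512 bytes.
blob = struct.pack("128f", *embedding)
assert len(blob) == 512

# And it round-trips back to floats losslessly for single precision.
restored = list(struct.unpack("128f", blob))
assert restored == embedding
```

Half a kilobyte that can be compared against any future photo is a very different artifact from a JPEG, which is exactly why it is hard to argue it is not personal data.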

