IlikeKitties a day ago

Responsible disclosure and its consequences have been a disaster for the human race. Companies need to feel a lot more pain a lot more often in order to take the security of their customers a lot more seriously. If you just give them a month to fix an issue and spoon-feed them the solution, it's just another ticket in their backlog. But if every other security issue makes enough news online that their CEOs get involved and a solution must be found in hours, not months, they will become a lot more proactive. Of course, it's the end users who would suffer most from this. But then again, they buy ASUS, so they suffer already...

  • jeroenhd 20 hours ago

    I think ASUS' turnaround time on this was quite good; I don't see the problem here. ASUS didn't deny the bug, didn't threaten to prosecute anyone for reverse engineering their software, and quickly patched it. I have no doubt that before the days of responsible disclosure, this process would've taken months and might have involved the police.

    Normal people don't care about vulnerabilities. They use phones that haven't received updates in three years to do their finances. If you spam the news with CVEs, people will just get tired of hearing about how every company sucks and become apathetic once there's a real threat.

    The EU is working on a different solution. Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations. That means if ASUS keeps fucking up, their motherboards become dead stock and stores won't want to sell their hardware anymore. That's not just computer hardware, but also smart fridges and smart washing machines. Discover a vulnerability in your dishwasher and you may end up costing the dishwasher industry millions in unusable stock if the manufacturers haven't bothered to add a way to update the firmware.

    • ycombinatrix 11 hours ago

      >They say “This issue is limited to motherboards and does not affect laptops, desktop computers”, however this affects any computer including desktops/laptops that have DriverHub installed

      >instead of them saying it allows for arbitrary/remote code execution they say it “may allow untrusted sources to affect system behaviour”.

      Sounds like Asus did in fact deny the bug.

    • buzer 16 hours ago

      > Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations.

      What are the specifics on that? Does the vulnerability need to be public, or is it enough if just the vendor knows about it? Does everyone need to stop selling the product right away when a new vulnerability is discovered, or do they get some time to patch it? Software like Windows almost certainly has unfixed vulnerabilities that Microsoft knows about and is in the process of fixing on every single day of the year. And currently, even if they do have a fix, they end up postponing it until the next Patch Tuesday.

      And what even is "vulnerability" in this context? Remote RCE? DRM bypass?

    • Polizeiposaune 9 hours ago

      "Stores are not permitted to sell products with known vulnerabilities under new cybersecurity regulations."

      Do stores have to patch known vulnerabilities before releasing the product to customers or can customers install the patch?

      • aspenmayer 2 hours ago

        Stores don’t have the capability to do this. These aren’t car dealerships we’re talking about here, more like Walmart or Best Buy. It would take a recall/RMA or online firmware updates, both of which already exist and are widely used.

  • holowoodman a day ago

    "Responsible" disclosure is paradoxically named because actually it is completely irresponsible. The vast majority of corporations handle disclosures badly in that they do not fix in time (i.e. a week), do not attribute properly, do not inform their users and do not learn from their mistakes. Irresponsibly delayed limited disclosure reinforces those behaviors.

    The actually responsible thing to do is to disclose immediately, fully, and publicly (and maybe anonymously, to protect yourself). Only after the affected company has repeatedly demonstrated that it reacts properly might it earn the right to a very time-limited heads-up of, say, 5 working days.

    That irresponsibly delayed limited disclosure is even called "responsible disclosure" is an instance of newspeak.

    • stavros a day ago

      I make software. If you discover a vulnerability, why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing?

      I get that companies sit on vulnerabilities, but isn't fair warning... fair?

      • ang_cire a day ago

        > why would you put my tens of thousands of users at risk, instead of emailing me and having the vulnerability fixed in an hour before disclosing

        You've got it backwards.

        The vuln exists, so the users are already at risk; you don't know who else knows about the vuln, besides the people who reported it.

        Disclosing as soon as known means your customers can decide for themselves what action they want to take. Maybe they wait for you, maybe they kill the service temporarily, maybe they kill it permanently. That's their choice to make.

        Denying your customers information until you've had time to fix the vuln is really just about taking away their agency in order to protect your company's bottom line, by not letting them know they're at risk until you can say, "but we fixed it already, so you don't need to stop using us to secure yourself, just update!"

        • renmillar 21 hours ago

          You're making an assumption that doesn't match reality - vulnerability discovery doesn't work like some efficient market. Yes, intelligence agencies and sophisticated criminal groups might find 0-days, but they typically target selectively, not deploying exploits universally.

          The real threat comes from the vast number of opportunistic attackers who lack the skills to discover vulnerabilities themselves but are perfectly capable of weaponizing public disclosures and proof-of-concepts. These bottom-feeders represent a much larger attack surface that only materializes after public disclosure.

          Responsible disclosure gives vendors time to patch before this larger wave of attackers gets access to the vulnerability information. It's not about protecting company reputation - it's about minimizing the window of mass exploitation.

          Timing the disclosure to match the fix release is actually the most practical approach for everyone involved. It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.

          Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability. Providing the fix simultaneously with disclosure allows for orderly patch deployment without service interruption.

          This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.

          • ang_cire 20 hours ago

            I understand the arguments for the current system, I just don't agree that disruption is worse than loss of agency. Your position inevitably ends up arguing for a paternalistic approach, as you do when you say

            > It eliminates the difficult choice customers would otherwise face - either disrupt their service entirely or knowingly remain vulnerable.

            You decided they are better off not having to make that choice, so you make it for them whether they like it or not.

            In fact, you made the worst choice for them, because you chose that they'd remain unknowingly vulnerable, so they can't even put in temporary mitigations or extra monitoring, or know to be on the lookout for anything strange.

            > Most organizations simply can't afford the downtime from abruptly cutting off a service, nor can they accept the risk of continuing with a known vulnerability.

            Now this is an interesting part, because the first half is true depending on the service, but bad (that's a BCDR or internet outage issue waiting to happen), and the second half is just wrong (show me a company that doesn't know and accept that they have past-SLA vulns unpatched, criticals included, and I'll show you a company that's lying either to themselves or their customers).

            > This coordinated approach minimizes disruption while still addressing the security issue - a balanced solution that protects both the security and continuity needs of end users.

            This is not a balanced approach, this is a lowest-common-denominator approach that favors service providers over service users. You don't know if it protects someone's security needs, because people have different security needs: a journalist being targeted by a state actor can have the same iPhone (or infotainment system, or home assistant, etc.) as someone's retired grandma.

            I've managed bug bounty and unpaid disclosure programs, professionally, and I know firsthand that it's the company's interests that responsible disclosure serves, first and foremost.

          • rvnx 20 hours ago

            Let’s imagine you found out how to steal funds from a bank. The best thing to do is to let them know that you are concerned (as a customer) for the safety of your own funds.

            If they do nothing after a reasonable amount of time, escalate to regulators or change banks. Then, once they announce that some processes have changed: “thanks to XXX working at YYY for helping us during it”. You win, they win, clients win, everybody wins.

            Unwanted public disclosure directly leads to public exploitation; there is nothing good at all about it.

            For example, there is an RCE in Discord (statistically all but certain given the rendering engine, just not public yet), and it is going to be exploited only if someone shares the technical details.

            If you don’t disclose it, it’s not like someone else will discover it tomorrow. It’s possible, but not more likely than it was yesterday. If you disclose it, you make sure that everybody with malicious intent knows about it.

            • leoqa 19 hours ago

              A middle ground: announce that Discord is insecure and that you’ve found a zero-day. Perhaps a trusted third party (MITRE?) exists that can attest publicly after you show a demo.

              Then customers are aware, Discord is pressured to act/shamed, and then you proceed with your private disclosure with a window.

              • ang_cire 19 hours ago

                Yep. People keep pushing this false dichotomy that it's either company-directed 'responsible disclosure', or it's "release full working POC and complete writeup publicly, immediately", and there's no middle ground.

                Yes, limited disclosure will make people start hunting for the vuln, but it still gives me more than enough time to revoke an API key, lock down an internet-facing service, turn off my Alexa (no, I don't/won't own one), uninstall the app, etc. And it's better than not knowing while someone intrudes into my system in the meantime.

                • holowoodman 16 hours ago

                  Knowing a half-truth is as bad as knowing nothing. Half the time I will do useless mitigations because actually I would have been unaffected. The other half I will do the wrong thing because of incomplete reporting.

                  • ang_cire 11 hours ago

                    This is true even of disclosures with all information available.

                    I can't count how many people did incorrect or unnecessary fixes for log4shell, even months after it was disclosed.

              • holowoodman 16 hours ago

                That is useless, because of the tons of sleazy CVE-collectors. They will always announce the next Heartbleed, details soon. When the details are out: total nothingburger, useless mitigation recommendations, incomplete report, misreported scope, different attack vectors, I've seen everything. It only feeds the CVE hype cycle, to no benefit for customers, victims, or the public.

            • holowoodman 16 hours ago

              You report that to the bank; the bank pays off you and the robbers to keep things quiet. Five years later, things are discovered and you go to jail for aiding and abetting.

              Or you report immediately to the press, press reports, police secures bank building, investigates sloppy practices, customers win, you are a hero, inept banksters and robbers go to jail.

        • fastball 21 hours ago

          Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually. You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.

          • ang_cire 21 hours ago

            > You're effectively shifting responsibility to consumers, who are probably not going to see a CVE for one of the dozens of pieces of software they use every day.

            Which is again, a problem created by the companies themselves. The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.

            But instead companies refuse to tell their customers when they're at risk, and make it out to be the researchers that are endangering people, when those researchers don't wait on an arbitrary, open-ended future date.

            > Increasing the chance of a bad actor actually doing something with a vulnerability seems bad, actually.

            Unless you know who already knows what, this is unprovable supposition (it could already be being exploited in the wild), and the argument about whether PoC code is good or bad is well trodden and covers this question.

            You are just making the argument that obscurity is security, and it's not.

            • layer8 19 hours ago

              > The way this should work is that the researcher discloses to the company, and the company reaches out to and informs their customers immediately. Then they fix it.

              If that was common practice, bad actors would make sure to be a registered customer of all interesting targets, so that they get informed early about vulnerabilities before there is a fix. And it would create a black market for that information.

              When someone gets the information “Asus BIOS has an RCE vulnerability related to driver installation”, they’ll be able to figure out the details quickly with high probability, like OP did.

            • fastball 21 hours ago

              You are shopping at a store along with some other customers. When entering the store, you notice that an employee of the store has left a large knife outside, under a trashcan. A shady character is wandering around the store, looking for someone to steal from, but hasn't figured out the right angle of attack yet. At some point, you (ever the responsible citizen) stand up on a table in the store and yell "Hey! Just wanted to let everyone know that there is a large, scary looking knife under the trash can outside. You have been warned." You then climb down from the table and leave the store. Knives are dangerous, after all. Immediately after your announcement the shady character goes and grabs the knife, which they then use to stab a customer on their way out of the store and steal their stuff. Unfortunately the customer didn't hear your announcement about the impending danger because they were in the toilet at the time.

              Whew, thank god for public disclosure with no prior warning to the people who would've been best equipped to retrieve their knife.

              ---

              This was clearly not the best way to handle the situation.

              Sure, you didn't know that the thief was unaware of the knife before your announcement, but he sure as shit was aware afterwards. You not knowing what they know is not a good reason to indiscriminately yell to no one in particular.

              I did not make the argument that obscurity is security. The knife being under a trashcan is a risk and should be addressed by management. But that doesn't mean non-obscurity automatically improves security.

              • ang_cire 20 hours ago

                A better analogy would be if you see a bunch of people walking around in faulty stab vests, and you tell them that the vests are faulty before they are recalled and replaced by the company. In which case, telling everyone those vests are actually not going to stop a knife is a very good thing to do.

                > I did not make the argument that obscurity is security... But that doesn't mean non-obscurity automatically improves security.

                ... egad. Yes, having information doesn't mean people will do the right thing with it, but you're not everyone's mommy/god/guardian. People should have the choice themselves about what actions they want to take, and what's in their own best interests.

                And obscuring the information that they need to make that choice, in the name of not making them less secure, is, ipso facto, asserting that the obscuring is keeping them more secure than they otherwise might be.

                So yes, you absolutely are arguing for obscurity as security.

                • fastball 20 hours ago

                  Sure, we can run with your analogy. So you make everyone aware that the stab vests are faulty. One of the people you make aware of this fact is a thief with a knife, who previously wasn't gonna take the risk of robbing anyone, since he only had a knife (not a gun) and everyone was wearing stab-proof vests. But now he knows, so he goes for it and stabs someone. You are partially responsible for this outcome in this hypothetical scenario, as the thief didn't know beforehand about the defect, and the only reason he ended up stabbing someone was this knowledge. Again, you not knowing whether the thief already knew does not excuse you if he didn't know before and does now through your actions.

                  I'm arguing that unveiling the obscurity can lead to attacks that wouldn't have happened otherwise, and you are partially to blame for those if they happen (which is true). I am not saying it was "more secure" before the disclosure. Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.

                  • ang_cire 20 hours ago

                    > But now he knows, so he goes for it and stabs someone.

                    Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work. He needs to go home and build a new one, and the people in the mall can go home before he comes back, now that they know their vests are flawed. Otherwise, someone who comes in and is aware of the flaw when the users are not, can stab everyone, and they'd have no clue they were vulnerable.

                    In real-world terms, the kind of mass exploitation that people use to fearmonger about disclosure already happens every day, and most people don't notice. The script kid installing a Monero miner on your server should not be driving the conversation; it should be the IC spook recording a journalist/dissident/etc.

                    > Just that, in the world afterwards, you must take responsibility for everyone knowing, including people who did not know before and abuse that knowledge.

                    This is just a generalized argument for censorship of knowledge. Yes, humans can use knowledge to do bad things. No, that does not justify hiding information. No, that does not make librarians/ researchers/ teachers responsible for the actions of those that learn from them.

                    • haswell 19 hours ago

                      > Except his old knife he already had with him isn't made for exploiting the flaw in the vest, so it doesn't work.

                      This seems like an unnecessary constraint to bolster your point instead of actually addressing what the other person is saying.

                      In this analogy, why can’t the old knife exploit the flaw? If the problem with the vest allows a sharp implement through the material when inserted at the correct angle or in the correct place, any sharp object should do.

                      To bring this back to the real world, this is all unfolding in virtual/digital spaces. The attacker doesn’t need to physically go anywhere, nor can potential victims easily leave the store in many cases. And the attacker often needs very little time to start causing harm thanks to the landscape of tools available today.

              • walls 15 hours ago

                Instead we get this version:

                You are shopping at a store along with some other customers. When entering the store, you notice a gun laying on the ground by the door. You keep coming back every week, pointing it out, asking if that's intended or not.

                They continue to ignore you, or explain how it's intended; a good thing even!

                Eventually someone with malicious intent also sees the gun, picks it up, shoots a fellow customer, puts it back where it was, and walks off.

                By the next day, miraculously, management will have found the time and resources to remove the gun.

                • fastball 5 hours ago

                  Agreed, that is what often happens. But having seen this pattern before does not mean the solution going forward is to yell "hey everyone, there is a gun" and hope management gets to it before the person with malicious intent does.

                  Sure, maybe management will ignore you if you tell them about the gun privately. At that point, feel free to disclose publicly. But they are guaranteed to not do anything if they don't know about it and you don't tell them (before telling everyone else including bad actors).

            • strken 21 hours ago

              Why should it work that way? Disclosing the vuln before fixing it seems like a surefire way for my mum to lose her life's savings. Why do you hate my mum so much?

              • pixl97 17 hours ago

                Why not turn this around?

                Why do the companies that make the software hate your mom so much they push out release after release of shit? We're all fine with these developers crapping on the floor as long as we give them 30 days to clean up their steaming pile.

                If instead every release was capable of instantly ruining someone's life, maybe we'd be more capable of releasing secure software and judging what software is secure.

        • lelanthran 18 hours ago

          I disagree. The vast majority of script kiddies don't know about the zero-day.

          Instead of just one bad actor using that vulnerability on a few select targets, your proposal will have tens of thousands of bots performing drive-by attacks on millions of victims.

      • neilv 20 hours ago

        I think one point being made is that (in this example) you would've been much less careless about shipping the vulnerability, if you knew you'd be held accountable for it.

        With current practice, you can be as sloppy and reckless as you want, and when you create vulnerabilities because of that, you somehow almost push the "responsibility" onto the person who discovers it, and you aren't discouraged from recklessness.

        Personally, I think we need to keep the good part of responsible disclosure, but also phase in real penalties for the parties responsible for creating vulnerabilities that are exploited.

        (A separate matter is the responsibility of parties that exploit the vulnerabilities. Some of those may warrant stronger criminal-judicial or military responses than they appear to receive.)

        Ideal is a societal culture of responsibility, but in the US in some ways we've been conditioning people to be antisocial for decades, including by elevating some of the most greedy and arrogant to role models.

        • haswell 19 hours ago

          > you would've been much less careless about shipping the vulnerability, if you knew you'd be held accountable for it

          I have a problem with this framing. Sure, some vulnerabilities are the result of recklessness, and there’s clearly a problem to be solved when it comes to companies shipping obviously shoddy code.

          But many vulnerabilities happen despite great care being taken to ship quality code. It is unfortunately the nature of the beast. A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.

          To me, the issue is that software now runs the world, despite these inherent limitations of human developers and the process of software development. It’s deployed in ever more critical situations, despite the industry not having well defined and enforceable standards like you’d find in some engineering disciplines.

          What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.

          I still believe the industry has a problem that needs to be solved, and it needs a broad culture shift in the dev community, but disagree that shining a bright light on every hole such that it causes massive harm to “make devs accountable” is a good or even reasonable solution.

          • pixl97 17 hours ago

            >What you’re describing is a scenario that would force developers to just stop making software, on top of putting significantly more people at risk.

            Good. I work in code security/SBOM; the amount of shit software from entities that should otherwise be creating secure software should worry you.

            Businesses care very little about security and far more about pushing the new feature fast. And why not? There is no real penalty for it.

            • haswell 16 hours ago

              What is your position on open source projects? Should someone who writes software in their spare time who decides to share it publicly be forced to stop doing so?

              I’m more open to harsher limits on commercial software, especially in certain categories. But underneath all of this we’re discussing an ecosystem and a culture which can’t be cleanly separated.

              Some of the binary thinking I see in this thread would be deeply damaging to parts of that ecosystem, with potentially major unintended consequences. Open source software is critically important for human rights and freedom. Taken at face value, many of the comments here directly threaten that freedom.

              I’m not assuming that’s your stance, but I’m curious how you see the open source aspect of this considering how significant its role is - especially in the security space.

              • pixl97 15 hours ago

                I don't have the answer here. Open source is the base of a lot of secure software. And at the same time, open source software gets pulled into other functional software that has widespread and potentially dangerous outcomes.

                OpenSSL, for example: any security flaw in that package has worldwide effects, but we would be worse off without it.

                Another example is the xz software that was attacked and then pulled into distributions. We were just lucky it was caught relatively early.

                • haswell 11 hours ago

                  Therein lies the rub. Whatever the answer is, it will require careful and thoughtful solutions, not oversimplified conclusions that raking developers over the coals publicly with no warning is somehow “Good”.

                  To be clear, I have far less sympathy for big software shops that pump out negligently bad code and then have to be prodded to fix it, but they’re not the only players involved.

          • neilv 18 hours ago

            I think that culture shift will have to come from the top in business -- the CEO and the board.

            At this point, the software development field is about operating within the system decided by those others, with the goal of personally getting money.

            After you've made the CEO and board accountable, I think dev culture will adapt almost immediately.

            Beware of attempts to push engineering licensing or certifications, etc. as a solution here. Based on everything we've seen in the field in recent decades, that will just be used at the corporate level as a compliance letter-but-not-spirit tool to evade responsibility (as well as a moat to upstart competitors), and a vendor market opportunity for incompetent leeches.

            First you make the CEO and board accountable, then let the dev culture change, and then, once you have a culture of people taking responsibility, you'll have the foundation to add in licensing (designed in good faith) as an extra check on that, if it looks worthwhile.

          • fulafel 16 hours ago

            > A sufficiently complex system will result in vulnerabilities even a careful person could not have predicted.

            I think as a field we're actually reasonably good at quantifying most of these risks and applying practices to reduce the risk. Once in a blue moon you do have "didn't see that coming" cases but those cause a very minor part of the damage that people suffer because of sw vulnerabilities. Most harm is caused by classes of vulnerabilities that are boringly pedestrian.

      • v3ss0n 36 minutes ago

        You made the software, you have your paying customers, and you are responsible for their security. If you have an RCE, that's your problem and you gotta fix it.

      • technion a day ago

        The problem with a fair warning is that once I email you such a warning, I'll never be able to anonymously publish it, no matter how much you ignore the report. The fair thing then becomes that I never go public, because I'm confident you'll call lawyers.

        • SahAssar a day ago

          So can't you disclose it anonymously? I'm pretty sure most people who are savvy enough to find zero-days know how to get an email address anonymously.

          • technion a day ago

            All I'll say is: try it in practice. You'll quickly find it dismissed as "not professional", and people will quickly claim it's "irresponsible" for that reason.

            • layer8 19 hours ago

              Why would you care, if you publish it anonymously?

        • frainfreeze a day ago

          Can't you just send it from an anon email address?

      • beeflet 7 hours ago

        Because there is an information disparity I could profit from instead of doing free work for you. Even if that disparity is just "posting the vuln to my blog" to get e-famous.

      • rakoo 19 hours ago

        According to the post above, if you've earned enough reputation, then you might be given that one-hour window for fixing before disclosure. The issue isn't so much whether there should be a "private" window but how long it lasts, especially when the vendor is a multi-billion-dollar company.

        • haswell 19 hours ago

          Let’s not forget the end users in this scenario, who will not be able to react to this as quickly as a billion dollar company regardless of how well they notify their customers.

          • rakoo 17 hours ago

            Absolutely, which is yet another reason why this abstraction away from the conditions under which anything tech-related is created should be eliminated.

      • efdee 19 hours ago

        Strange wording. You are the one who put tens of thousands of your users at risk, not the one who discovered the problem.

        • stavros 19 hours ago

          If you leave your shop's door open after hours, and someone starts shouting "HEY GUYS! THIS DOOR IS OPEN! LOOK!", I have a hard time putting 100% of the blame on you.

          • pixl97 17 hours ago

            If I point out the bridge is cracking and you get angry about it, I'm blaming the idiots that engineered a crap bridge and didn't maintain it.

            Maybe it's time we get professional standards if this is how we are going to behave?

            • haswell 16 hours ago

              This seems like a fallacious analogy to me.

              Why is a cracked bridge dangerous? Because anyone traveling over it or under it is at risk of being hurt if the bridge collapses. Warning people that it is cracking does not increase the likelihood of a collapse.

              Why is a software vulnerability dangerous? Because anyone who knows about it and has nefarious intent can now use it as a weapon against those who are using the vulnerable software, and the world is full of malicious actors actively seeking new avenues to carry out attacks.

              And there are quite a few people who would exploit the knowledge of an unlocked door if given the chance.

              There’s a very clear difference in the implications between these scenarios.

              • pixl97 15 hours ago

                A cracked bridge is always dangerous.

                A vulnerable piece of software is always dangerous.

                There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities waiting for the right opportunity, say economic warfare.

                Much like building safe bridges from the start, we need the same ideology in software. The 'we can always patch it later' mindset is eventually going to screw us over hard.

                • haswell 11 hours ago

                  I agree with the conclusion that we need safer software from the start.

                  But we also have to deal with the reality of the situation in front of us.

                  I will maintain that the differences between the implications of revealing a crack in a bridge vs. prematurely revealing a vulnerability to literally the entire world are stark. I find it pretty problematic to continue comparing them and a rather poor analogy.

                  > There are large numbers of state funded exploit groups and otherwise blackhat organizations that find and store these vulnerabilities

                  This underscores my point. What you’ve been describing is a scenario in which those organizations are handed new ammunition for free (assuming they don’t already have the vuln in their catalog).

      • holowoodman 17 hours ago

        Fair warning through "responsible" disclosure has been abused again and again and again. Why should I trust company number 1000 after 999 have misled bug reporters, the public, their customers, and the rest of the world about their own "just an hour"?

      • cenamus 18 hours ago

        You already put your tens of thousands of users at risk. It's the people putting bugs in the software who do that, not the ones discovering them.

        • stavros 17 hours ago

          Please enlighten me on how you've managed to never write any bugs.

          • cenamus an hour ago

            Didn't say that. But I can't blame the ones publicising the bugs we put in there.

          • pixl97 17 hours ago

            Well, not sure DJB posts here, but he has kept it to a minimum.

            And this is mostly BS too. People don't write bug-free software, they write features.

            Other industries had to license professional engineers to keep this kind of crap from being a regular issue.

            • holowoodman 16 hours ago

              "Licensed professional engineers" are a software-development myth.

              If all our software was as simple as a bridge, then we could have that. A bridge is 5 sheets of plans, 10 pages of foundation checks, 30 pages of calculations, 100 pages of material specs. You can read all of those in a day. Check the calculations in a week. The next bridge will be almost the same.

              Now tell me about any software where the spec is that short and simple. /bin/cat? /bin/true? Certainly not the GNU versions of those.

              Software is different because we don't build 1000 almost-identical bridges with low complexity. We always build something new and bespoke, with extremely high complexity compared to any kind of building or infrastructure. Reproduction is automatic, so there will never be routine. Totally different kind of job, where a licensed professional will not help at all.

              • pixl97 15 hours ago

                I hate to be dismissive, but tired old meme is tired.

                With what I do I work with a lot of larger companies and get to see the crap they push out with no architectural design and no initial security posture. I see apps with thousands of packages, including things like typosquats. I see the quality of the security teams which are contractors following checklists with no idea what they mean.

                Saying that an actual profession would make no difference sounds insane to me. Again, to me, it sounds like every other industry saying 'self-regulation is fine, we're special, we'll manage ourselves'.

                • holowoodman 10 hours ago

                  No. Licensed professionals are the engineering checklist people. "Not my fault, wasn't on the checklist, I've used the official approved one".

                  Licensed professionals checked a dam built by licensed professionals. The dam broke and killed people. Everyone claims to be innocent and that the other party didn't read the right reports or didn't report the right problems: https://www.ecchr.eu/fileadmin/Fallbeschreibungen/Case_Repor... It is all just another method of shifting blame.

                  What really helps more than prescriptive regulation is liability. As soon as there is a strict liability for software companies, things will get better. What could also help is mandatory insurance for software producers. Then the insurance companies will either charge them big bucks or demand proof of safety and security.

              • Jap2-0 8 hours ago

                > We always build something new and bespoke, with extremely high complexity compared to any kind of building or infrastructure.

                Maybe this is part of the problem?

      • JonChesterfield a day ago

        An hour, sure. Frequently companies sit on it for months.

        • stavros a day ago

          Yes but responsible disclosure should be "you have a week (or whatever) from my first email, then I go public".

          • chii a day ago

            what if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

            If the reason for responsible disclosure is to ensure that no members of the public is harmed as a result of said disclosure, should it not be a conversation between the security researcher and the company?

            The security researcher should have an approx. idea of how or what to do to fix, and should give a reasonable amount of time for a fix. If the fix ought to be easy, then a short time should suffice, and vice versa.

            • stavros a day ago

              If the vulnerability can't be fixed within the week, maybe the company should be SOL. This will incentivize companies to build their software better, as they'll know that any vulnerability that is hard to fix will mean consequences.

              Maybe the mitigation is for the company to take its service down while it works on the problem. Again, a good incentive to avoid that in the first place. Also an incentive to not waste any time after a report comes in, to see and act on it immediately, etc.

              At some point, we have to balance customer risk from disclosing immediately with companies sitting on vulnerabilities for months, vulnerabilities that may be actively exploited.

              • dijit a day ago

                I hear what you're saying and I agree, but it's perhaps too black and white.

                Let's take one of the most disastrous bugs in recent history: meltdown.

                Speculative execution attacks inside the CPU. This required (in Paul Turner's words): putting a warehouse of trampolines around an overly energetic 7-year-old.

                This understandably took a lot of time, both for microcode and OS vendors... it took even longer to fix it in silicon.

                Not everyone is running SaaS that can deploy silently, or runs a patch cadence that can be triggered in minutes.

                I work in AAA games and I'm biased; we have to pass special certifications to release patches. If your publisher has good relations, waiting for CERT by itself (after you have a validated fix) is 2 weeks.

                • holowoodman 10 hours ago

                  Spectre/Meltdown is the perfect example of vendors, Intel and AMD, deflecting blame onto the OS and software producers, successfully avoiding a recall, avoiding refunds for decreased performance, and avoiding most of the blame.

                  What actually should have happened is a full recall of all affected hardware, with microcode fixes and payments for lost performance in the meantime, until the new hardware arrived.

                  Meltdown was a disaster, not only because the bugs themselves were bad, but especially because we let Intel and AMD get away scot-free.

                  • chii 4 hours ago

                    There is no world in which a recall (and/or a refund) is ever possible.

                    Until it is demonstrated that such flaws are a matter of life and death, no regulation covering them is possible (unlike cars, which do have recalls for faults with life-and-death implications).

                • precommunicator 21 hours ago

                  > waiting for CERT by itself (after you have a validated fix) is 2 weeks

                  If the industry practice were a few days to disclosure, just maybe those practices might change, or maybe there would be an (extra-paid) option to skip the line for urgent fixes.

              • worthless-trash 19 hours ago

                And when it's an OS company and the test suites take a week to run (really)?

                Dev time + test time + upload to CDN is often longer than a week.

                • stavros 19 hours ago

                  You know, airlines also had a ton of excuses for not making air travel safe: it's expensive, it takes a while, do you know how long these things take, etc.

                  Still, they did it, because we decided safety is important to us.

                  • worthless-trash 7 hours ago

                    Airlines also get to control how equipment is used and have clearly controlled deployments. OS vendors do not.

            • NegativeK 20 hours ago

              For any timeline the company can't hit, whether it's a week or 90 days, they should come up with compensating controls, detections, etc that users can implement. Managing vulnerable software isn't a new science.

              > The security researcher should have an approx. idea of how or what to do to fix

              Any expectation put on the security researcher beyond "maybe don't cause unnecessary shit storms with zero days" needs to be met with an offer of a fat contract.

            • throw0101d 21 hours ago

              > what if the vulnerability cannot be easily fixed within the week, even if the company stops all work and focuses completely on the problem?

              A week is an example and not a definitive value dictated by law, statute, or regulation.

              When you report the vulnerability you give the developer a timeline of your plans, and if they can't make the deadline they can come back to you and request more time.

              • freeopinion 19 hours ago

                This back and forth is not possible if the researcher is anonymous. And it places all power in the hands of the developer. If the developer says, "I need a year" but the researcher doesn't give them a year, then the developer sues? Or files a criminal complaint? Why is all the risk on the researcher?

                So a gunshy researcher stays anonymous to keep their risk lower. They craft a disclosure with a crypto signature. They wait for the developer to post a public announcement about the disclosure that doesn't expose a ton of detail but does include the signature hash and general guidance about what to do until a fix is released.

                The researcher then posts their own anonymous public announcement with however much detail they choose. They might wait 24 hours or 7 days or 7 months. They might make multiple announcements with increasing levels of detail. Each announcement includes the original hash.

                Anybody can now make an announcement at any time about the vulnerability. If an announcement is signed by the same key as the original and contains more detail than given by the developer, the public can argue back and forth about who is being more or less responsible.

                Now the researcher can negotiate with the developer anonymously and publicly. The researcher can claim a bounty if they ever feel safe enough to publicly prove they are the author of the original report.

                Developers who routinely demonstrate responsible disclosure can earn the trust of researchers. Individual researchers get to decide how much they trust and how patient they are willing to be. The public gets to critique after the fact whether they sympathize more with the developer or the researcher. Perhaps a jury can decide which was liable for the level of disclosure they each pursued.
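
                The commit-then-reveal mechanics described above are simple enough to sketch. Here is a minimal illustration in Python (assuming the third-party 'cryptography' package for Ed25519 signatures; the names and flow are a hypothetical sketch of this comment's scheme, not an established protocol):

                  import hashlib
                  from cryptography.exceptions import InvalidSignature
                  from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

                  # Researcher, at report time: a long-lived anonymous identity.
                  key = Ed25519PrivateKey.generate()
                  report = b"Full vulnerability details, PoC, suggested mitigations..."
                  commitment = hashlib.sha256(report).hexdigest()  # hash the developer publishes
                  signature = key.sign(report)  # ties the report to the anonymous key

                  # The researcher sends `report` privately and asks the developer to include
                  # `commitment` in their public announcement. Any later statement signed by
                  # the same key is provably from the original reporter.
                  public_key = key.public_key()  # published alongside later announcements

                  def verify_reveal(revealed: bytes, sig: bytes, expected: str) -> bool:
                      # Anyone can check that a revealed report matches the original
                      # commitment and was signed by the same (still anonymous) researcher.
                      if hashlib.sha256(revealed).hexdigest() != expected:
                          return False
                      try:
                          public_key.verify(sig, revealed)
                          return True
                      except InvalidSignature:
                          return False

                  assert verify_reveal(report, signature, commitment)

                Claiming a bounty later then amounts to signing one more message ("I am the author of the report behind this commitment") with the same key.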

              • chii 21 hours ago

                This is what I presume happens today: you have a date on which disclosure will happen, and the company can request more time.

                And this is exactly what the parent poster is against - because it is possible to continuously extend this date.

            • delusional a day ago

              > The security researcher should have an approx. idea of how or what to do to fix.

              How is that in any way the responsibility of independent randos on the internet?

              If you truly believe these issues should be fixed, the right answer would be to hold companies accountable for timely security patches, overseen and managed by a government department.

              I'm not sure that's a good idea, but expecting random security researchers to somehow hold massive billion-dollar enterprises accountable is silly.

            • bbarnett a day ago

              You can always have a conversation if they provide justification.

        • bbarnett a day ago

          Many types of vulnerabilities cannot be resolved in one hour. Some require complex thought to resolve.

          One hour is absurd for another reason: what timezone are you in? And they? What country, and is it therefore a holiday?

          You may say "but vulnerability", and yes. 100% no heel dragging.

          But not all companies are staffed with 100k devs, and a few days to a week is a balance between letting every script kiddie know and the potential that it may already be exploited in the wild.

          If one is going to counter unreasonable stupidity, use reasonable sensibility. One hour is the same as no warning.

    • giantg2 a day ago

      That's because nobody actually cares about security, nor do they want to pay for it. I'm a security champion at my company, and security-related work gets pushed off as much as possible to focus on feature work. If we actually wanted security to be a priority, companies would employ security champions whose only job was to work on security aspects of the system, instead of trying to balance security and feature work, because feature work will always prevail.

    • Retr0id 20 hours ago

      It's such a loaded term that I refuse to use it. "vendor-coordinated disclosure" is a much better term, imho

      (and in the world of FOSS you might have "maintainer-coordinated" too)

    • rfl890 17 hours ago

      What about damage control? I would argue your "anonymous, immediate disclosure" to the public (filled with bad actors) would be rubbing salt in the wound, allowing more people to exploit the vulnerability before it's fixed. That's why nobody publishes writeups before the vuln is fixed. Even if corporations don't fix vulns in time, I can only see harm being done from not privately reporting them.

      • pixl97 16 hours ago

        >I can only see harm being done from not privately reporting them

        Because you need to look at the fuller picture. If every vuln were published immediately, the entire industry would need to be designed differently. We wouldn't push features at a hundred miles per hour but would instead have pipelines more optimized for security and correctness.

        There is almost no downside currently for me to write insecure shit, someone else will debug it for me and I'll have months to fix it.

    • IlikeKitties a day ago

      I mean, to be a bit more reasonable, there's a middle ground here. Maybe disclosing a massive RCE vulnerability in software used by a lot of companies on the 25th of December is not a good idea. And perhaps an open source dev with a security@project mail deserves a tad more help and patience than a megacorp with a record of shitty security management. And a company that takes security seriously and is responsive to security researchers' inquiries deserves at least the chance to fix it fast, before it becomes public.

      It's just that there are some companies EVERYONE knows are shitty. ASUS is one of them.

      • holowoodman a day ago

        You are right about open source developers, who do this on the side, as a hobby, and who even when they don't are usually underpaid and understaffed. They do deserve more time and a different approach.

        But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck. Just look at their payment terms: do they want their money within 30 days or 25 working days? Usually it is the former; they don't care about your holidays, so why should anyone care about theirs? Also, the bad guys don't care about their victims' holidays; you are just giving them extra time to exploit. The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there. They also have their guards on duty and their maintenance staff on call for a broken pipe or something.

        What's most important is that I'm saying we should reverse the "benefit of the doubt". The vast majority of corporations have shitty security handling. Even the likes of Google talk big with their 90-day window from private irresponsible disclosure to public disclosure, and even Google regularly fails to fix things within those 90 days. So the default must be immediate, public, and full disclosure. Only when companies have proven their worth by correctly reacting to a number of those can they be given the "benefit of the doubt" and a heads-up.

        Because otherwise, when the default is irresponsible private disclosure, they will never have any incentive to get better. Their users will always be in danger unknowingly. The market will not have information to decide whether to continue buying from them. The situation will only get worse.

        • throw0101d 21 hours ago

          > But corporations making big bucks from their software need to be able to fix things quickly. They took money for their software, so it is their responsibility. If they cannot react on a public holiday, tough luck.

          Because it is not corporations that react on public holidays, but developers, who are human beings.

          It is not corporations that install patches on a Friday, but us sysadmins, who are human beings.

          • holowoodman 16 hours ago

            Companies will act out of greed and use their customers and developers as "human shields" to get out of their responsibility. Your on-call duty should be paid by the hour just as any duty, doubling the pay on weekends, holidays and nights. "But the poor developers" is just the "we will hurt this poor innocent puppy"-defense. The evil ones are the ones inflicting the hurt, the greedy companies. Not the reporters.

        • IlikeKitties a day ago

          Overall, I share your reasoning and would mostly concur, but there are some rather important caveats, especially regarding this one:

          > The only valid argument would be that the victims might not be reading the news about your disclosure on a holiday. But since you are again arguing about software used by a lot of companies (as opposed to private users), I don't see a problem there.

          Let's say MegacorpA is a big software vendor that makes some kind of software other companies use to manage really sensitive user data. Even if MegacorpA fixes their stuff on the 25th, two hours after they got an email from you, all their clients might not react that fast, and thus a public disclosure could cause massive harm to end users even if MegacorpA did everything right.

          Ultimately, I guess my argument is that there's not a one size fits all solution. But "responsible disclosure" should be reserved for companies acting responsibly.

    • delusional a day ago

      > "Responsible" disclosure is paradoxically named because actually it is completely irresponsible.

      It's only paradoxical if you've never considered the inherent conflicts present in everything before.

      The "responsible" in "responsible disclosure" relates to the researchers responsibility to the producer, not the companies responsibility to their customers. The philosophical implication is that the product does what it was designed to do, now you (the security researcher) is making it do something you don't think it should do, and so you should be responsible for how you get that out there. Otherwise you are damaging me, the corporation, and that's just irresponsible.

      As software guys we probably consider security issues a design problem: the software has a defect, and it should be fixed; a breakdown in the responsibility of the corporation to its customer. "Responsible disclosure" considers it external to the software. My customers are perfectly happy; you have decided to tell them that they shouldn't be. You've made a product that destroys my product, so you need to make sure you don't destroy my product before you release yours.

      The security researcher is not primarily responsible to the public, they are responsible to the corporation.

      It's not a paradox, it's just a simple inversion of responsibility.

      • einsteinx2 21 hours ago

        > The security researcher is not primarily responsible to the public, they are responsible to the corporation.

        Unless the researcher works for the corporation on an in-house security team, what’s your reasoning for this?

        Why are they more responsible to the corporation they don't work for than to the people they're protecting (depending on the personal motivations of the individual security researcher, I guess)?

      • drowsspa 20 hours ago

        With "simple reversion of responsibility" do you mean your twisted logic of "everyone should think first and foremost about my profits"?

  • oezi a day ago

    The problem is just one of liability legislation. Car manufacturers are ordered to recall and fix their cars, but software/hardware companies face too little pressure. I think customers should be able to get a full refund for broken devices (with an unfixed CVE, for example).

    • mjevans 18 hours ago

      The devices and core functionality (including security updates, which are fixes to broken core functionality) must survive the manufacturer and should not require ongoing payments of any type. (Newly created updates? Maybe. Access to corrections of basic behavior? Bug/security fixes should remain free.)

      • oezi 16 hours ago

        Yes. I would envision at least 5 years of such update fixes, and another 5 years available for purchase, capped at 20% of the device price.

        All manufacturers must pay an annual fee into an insurance scheme which covers the case of manufacturer insolvency.

  • okanat a day ago

    Citing CGP Grey: solutions that are the first thing you can think of are terrible and ineffective.

    Good safety/security culture encourages players to not hide their problems. Corporations are greedy bastards. They'll do everything to hide their security mistakes.

    You are also making legitimate, fixable-within-a-month issues available to everyone, which greatly increases their chances of being exploited.

    • IlikeKitties a day ago

      > You are also making legitimate, fixable-within-a-month issues available to everyone, which greatly increases their chances of being exploited.

      I don't think you can fathom the number of people whose primary device is a phone that hasn't seen an Android update in roughly three years, and who use it for every digital service they have: banking, texting, doomscrolling, porn, ...

      Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do nothing about it. Only when their bottom line is threatened, because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do", will we see change.

      • okanat 20 hours ago

        > I don't think you can fathom the number of people whose primary device is a phone that hasn't seen an Android update in roughly three years, and who use it for every digital service they have: banking, texting, doomscrolling, porn, ...

        I do. I'm an embedded software developer in a team that cares about having our software up-to-date a lot.

        > Users, especially those most likely to be exploited, are already vulnerable to so much shit, and even when there's a literal finished fix available, these vendors do nothing about it. Only when their bottom line is threatened, because even my mom knows "Don't buy anything with ASUS on it, your bank account gets broken into if you do", will we see change.

        Yes, individuals are quite exploitable. That's why I really like the EU's new regulations, the Cyber Resilience Act and the new Radio Equipment Directive. When governments enforce reasonable disclosure and fixing timelines, and threaten your company's ability to sell in a market altogether if you don't comply, it works wonders. Companies hate not being able to make money. So all the extra security policies and vulnerability tracking we have been experimenting with, and secure-by-default languages, are now the highest priority for us.

        EU regulation makes sure that you're not going to be sold a router that's instantly hackable within a year. It will also force chip manufacturers to offer meaningful maintenance windows, like 5-10 years, due to pressure from ODMs. That's why you're seeing all the smartphone manufacturers extend their support timelines; it is not pure market pressure. They didn't give a fuck about it for more than 10 years. When the EU came with a big stick, though...

        Spreading word-of-mouth knowledge works up to a point. Having your entire product line banned from entering a market works almost every time.

      • layer8 19 hours ago

        The fact that people run outdated OS versions is totally true, but it also indicates that the risk of being seriously harmed by those vulnerabilities is quite low in reality, if you're not an individually targeted person. And that's why not a lot of people care about them.

        • int_19h 5 hours ago

          In this day and age, you're just as likely to be targeted by a large-scale ransomware operation that just happens to find your vulnerable device by network scanning, for example.

      • einsteinx2 21 hours ago

        I’m not sure that’s a great example, as they would be vulnerable to many responsibly disclosed and previously fixed issues anyway, since they never update.

        In fact, they would be just as vulnerable to any new responsibly disclosed issues as they would be if those were immediately “irresponsibly” disclosed, because, again, they never update anyway.

    • Avamander 21 hours ago

      > Good safety/security culture encourages players to not hide their problems. Corporations are greedy bastards. They'll do everything to hide their security mistakes.

      This is why I despise the Linux CNA for working against the single system that tries to hold vendors accountable. Their behavior is infantile.

  • hamandcheese 21 hours ago

    Business idea. Maybe this already exists. A disclosure aggregator/middle man which:

    - protects the privacy of folks submitting

    - vets security vulns. Everything they disclose is exploitable.

    - publishes disclosures publicly at a fixed cadence.

    - allows companies to pay to subscribe to an "early feed" of disclosures which impact them. This money is used to reward those submitting disclosures, pay the bills, and take some profit.

    A bug bounty marketplace, if you will, that is slightly hostile to corporations. Would that be legal, or would it be extortion?

    • hashstring 15 hours ago

      I've thought of something along these lines before, too.

      I think there is serious potential for this.

    • ajcp 19 hours ago

      It does indeed already exist in many sectors as trade publications and journalism.

    • darkwater 21 hours ago

      Isn't that basically HackerOne?

      • Avamander 21 hours ago

        No, HackerOne gets paid by the companies, so they're heavily incentivized to work for the companies' benefit.

        I've had such bad experiences with unskilled H1 triagers, three times now, that the next vuln I find in a company that uses H1 will go public instantly. I'm never again going to spend that much effort just to get a triager who won't actually bother to triage.

      • asmor 21 hours ago

        Except there you spend several months walking an underpaid person in India, who can barely use a shell, through the reproduction steps, finally get a confirmation after all that work, and the vendor still ignores you.

      • xmodem 21 hours ago

        HackerOne, BugCrowd, et al don't appear to make any serious effort to vet reports themselves.

        • hashstring 15 hours ago

          Is that true? I thought you could pay for an H1 service where professionals triage the vulnerabilities and only pass on the valid ones?

          • ycombinatrix 11 hours ago

            Our company pays for one of these third party triage services for H1.

            The quality is seriously lacking. They have dismissed many valid findings.

            • hashstring 9 hours ago

              Ah thank you for the info!

              From what I understood, the service is also (very) expensive. Wild.

  • pjmlp 20 hours ago

    As I keep saying: liability, like in any other industry.

    Most folks don't put up with faulty products unless by choice, like at those 1 euro/dollar shops, so why should software get a pass?

  • fulafel 18 hours ago

    Or we could just have regulation, or at least the same product liability for software as for everything else.

  • lofaszvanitt 14 hours ago

    Just post it the next day, when found. That would be the proper incentive, and losing face also contributes to better security next time.

Gys a day ago

> I asked ASUS if they offered bug bounties. They responded saying they do not, but they would instead put my name in their “hall of fame”. This is understandable since ASUS is just a small startup and likely does not have the capital to pay a bounty.

:(

  • eterm a day ago

    It's understandable for such small companies, like Cisco, which does the same for the myriad of online offerings they've acquired over the years.

    Cisco have gone even further by forgetting about their security announcements page, so any recognition is now long lost in the void.

    • ang_cire a day ago
      • eterm 18 hours ago

        When I reported something, and this was probably around 8 years ago, they only had bounties for their equipment, not for "online properties".

        I reported a vulnerability in some HR software they owned, but alas I can't even find where it used to live on the internet now.

        • ang_cire 13 hours ago

          The 2 that are live there definitely cover software (one doesn't deal in hardware at all).

  • Xelbair a day ago

    No bug bounty? Onto the black market the exploit goes.

    That, or full public disclosure.

    • hypercube33 19 hours ago

      Maybe something for Gamers Nexus to light a fire.

    • LadyCailin 19 hours ago

      I wonder how worried they would get if more people actually started selling exploits on the black market instead of reporting them and not getting a bug bounty. If a company doesn’t offer a bug bounty program in the first place, my gut feeling is that they probably wouldn’t care in that case either. Either way, this is a super good reason not to do business with such a company.

      • NooneAtAll3 5 hours ago

        I wonder if centralized "sell program vulnerabilities here" government shops could be set up.

        While intelligence agencies are an obvious beneficiary, this would also give the government leverage over capital.

      • Xelbair 19 hours ago

        If the fire is lit under them, after their software leads to a widespread hack, they will care.

        That's the point: to put pressure on them to CARE.

  • throaway920181 a day ago

    This makes me never want to buy another ASUS product again.

    • pohuing a day ago

      For me it's them lying about providing a way to unlock the bootloader of my soon-to-be 1000€ paperweight (2 Android updates only) called an ASUS Zenfone 10.

      • jeroenhd 20 hours ago

        If they actually lied about it, that kind of money could make it worth taking them to (whatever your local equivalent of) small claims court.

        • pohuing 13 hours ago

          I'm in Germany, which makes it a bit harder. Someone in the UK went through the trouble, and all they got was an offer of a refund or an insanely overpriced option to downgrade the OS, IIRC.

          About the lie: as recently as a year ago, they had repeated multiple times that this would be an option...

          See https://www.reddit.com/r/zenfone/comments/1ccy11g/asus_is_wo...

      • FirmwareBurner 21 hours ago

        Out of curiosity, what got you to spend 1000 euros on a Zenfone 10 when the Samsung S23 was superior overall, cheaper, and provides something like 5 years of updates? It's not like previous phones from ASUS had a better track record. I kept warning people to stay away from the Zenfone, yet the online community kept overhyping it for some reason as the second coming of Christ or something.

        • pohuing 13 hours ago

          What campl3r said. I tried the dongle approach when the jack in my Pixel 4a was failing, but found I didn't like it. Having the cable go out the bottom in the center is a terrible place for me, as I rest my phone on my outstretched pinky. The Zenfone ticked all the boxes on paper and in reviews: great chipset, solid build, a form factor fitting my tiny hands (though in retrospect it's so heavy that my pinky hurts after a couple hours of reading), and a headphone jack, which I use to plug my phone into my stereo and my Sennheiser headphones. Really, the jack is the primary reason I got this phone. Coupled with the fact that until now all Zenfones had a hassle-free bootloader unlock and a decent ROM community, it really was the best choice on paper. God damn it ASUS, I wasn't aware they're that dodgy :/

        • campl3r 21 hours ago

          The Zenfone is smaller and has a headphone jack. It's the superior phone.

          • FirmwareBurner 21 hours ago

            It is virtually the same size [1] as the era-equivalent S23.

            I don't think a headphone jack, which you can get via a super cheap USB-C adaptor, justifies a 1000 euro paperweight.

            [1] https://www.gsmarena.com/size-compare-3d.php3?idPhone1=12380...

            • dvratil 20 hours ago

              The problem I found with the adaptors is that you can't charge your phone and listen to music at the same time.

              I have an older car with an old stereo where the only external input is via jack. That worked perfectly fine with my old phone. When I got a new Samsung, I went through the hassle of trying several "combined USB-C charger and audio jack" adaptors, only to eventually find out they can only work in one mode or the other, not both at the same time. I ended up throwing away my old phone holder and spending even more money on one with built-in wireless charging so I could both listen to damn music and charge my phone at the same time while driving.

              • jicksaw 14 hours ago

                Just an FYI for anyone who has the same problem: the reason the adapters don't work is that they operate in Audio Accessory Mode. The signal comes from the phone's DAC and is passed through the data lines of the USB connector to the 3.5 mm jack. The problem is that the charging mode also uses those lines to communicate, so it can't do both.

                The solution is to use a USB hub with an integrated DAC. I use an older version of this: https://satechi.net/products/mobile-pro-hub-sd

              • theandrewbailey 20 hours ago

                > only to eventually find out they can only work in one mode or the other, not both at the same time.

                I can't tell you how many times I've bought something small that should reasonably do two things at once, but can't. Literal e-waste garbage.

                • FirmwareBurner 19 hours ago

                  Isn't the 1000 euro phone a bigger piece of e-waste?

            • lucb1e 8 hours ago

              I bought several of those adapters. The issues are these:

              0. They don't work on all models. Not product lines, e.g. not "all Pixel phones" or so; no, reviews mention "works with Pixel 3 but not Pixel 3a". You need to either waste a bunch of resources sending various ones back and forth, or scour listings until you find one where a review mentions that it works with the model you have. It turns out that all the ones I ordered work on the two USB-C phones I have by now (one from work, one private), but...

              1. The quality of the mic conversion is so bad that people cannot understand what I'm saying. It's described as though I'm speaking while holding the phone under water. Plugging the headphones into my work laptop makes it clear that the problem is not the mic itself, nor the meeting software, my WiFi, or anything else.

              2. Loose contacts in most of the converters, if not from the start then after a handful of uses. The headphone cable itself somehow doesn't have that problem, so I don't think it's a me problem (many reviews also mentioned it).

              3. You can't charge at the same time. I've tried wireless charging, but that makes the device overheat. There are adapter models that also let you plug in a power cable, but I didn't buy one for some reason. Probably all of them had bad reviews about all of the aforementioned problems, and I didn't find a single one that sounded like it was worth a try.

              4. You need to plug it in at the right time. One of the converters needed to be plugged in before joining a meeting, another one after. The OS or the meeting software (not sure which) wouldn't route the audio correctly otherwise.

              And cheap phones manage to include headphone jacks somehow. It's just a status symbol when manufacturers exclude it from more expensive models; it doesn't seem to serve any purpose, as the Zenfone 10 shows by having one and also being great on all other fronts -- except one.

              > a 1000 Euro paperweight

              It's actually 700€.

              It does everything I want. After searching a few days for models that are small, have a headphone jack, and are capable of running Android 14 or so, I was so happy to find that the Zenfone 10 checked all the boxes. Then I found out why it didn't initially show up: ASUS was a manufacturer I had previously excluded because you can't root the device. It's not your device: the manufacturer maintains control over what you can and cannot do with it. You can't make full-system backups, for example, because access to your apps' data folders isn't part of what they allow you. The device would easily have been worth the 700€, because it sounded like I could finally stop wasting my time choosing which compromise to make (huge size, no jack, or old chipset were the main options). Finding out there was a dealbreaker after all felt like an ice bath. I just won't buy something where I can't access my own data and make a fricking backup.

antmldr a day ago

>so I could see if anyone else had a domain with driverhub.asus.com.* registered. From looking at other websites certificate transparency logs, I could see that domains and subdomains would appear in the logs usually within a month. After a month of waiting I am happy to say that my test domain is the only website that fits the regex, meaning it is unlikely that this was being actively exploited prior to my reporting of it.

This only remains true insofar as no one directly registered a certificate for a driverhub subdomain. Anyone with a wildcard could have exploited this, silent to certificate transparency?

  • ZoneZealot a day ago

    A wildcard certificate is only valid for a single label level: '*.example.com.' would not allow 'test.test.example.com.', but would allow 'test.example.com.'. If someone issued a wildcard for '*.asus.com.example.com.', they could then present a webserver under 'driverhub.asus.com.example.com.' and be seen as valid.

    • throaway920181 a day ago

      Yes... I believe you've successfully reworded what your comment's parent said.

      • a2128 a day ago

        I think the point is that it wouldn't be silent to certificate transparency, because having a certificate for *.asus.com.example.com would be a clear indication of something suspicious

      • ZoneZealot a day ago

        Parent comment is making a point that it might have been possible for an attacker to avoid discovery via certificate transparency logs, because anyone 'with a wildcard' could pull off the attack, which is not correct.

        I'm pointing out that a wildcard at the apex of your domain (which is what basically everyone means when saying 'a wildcard') would not work for this attack. Instead, to perform the attack with a wildcard certificate, it would need to be issued for '*.asus.com.example.com.', which would certainly be obvious in certificate transparency logs.

        • smileybarry 15 hours ago

          Can you still publicly apply for a “*.*.mydomain.com” certificate? IIRC a wildcard cert starting with “*.*.” allows you to chain 2+ names with that cert, I think? (E.g.: “*.*.example.com” cert would match “hello.world.and.hi.com.example.com”)

          • ZoneZealot 12 hours ago

            With public CAs, you can only apply for a wildcard at a single label. You can't have nested wildcards.

            RFC6125 limits wildcards to a left-most label (6.4.3. paragraph 2): https://www.rfc-editor.org/rfc/rfc6125.html#section-6.4.3

            I don't know of any CA that allows for wildcard characters within the label, other than when the whole label is a wildcard, but it is possible under that RFC.

            The CA/Browser Forum's baseline requirements dictates how any publicly trusted CA should operate, and it defines a wildcard certificate in section 1.6.1 (page 26) here https://cabforum.org/working-groups/server/baseline-requirem...

            > Wildcard Certificate: A Certificate containing at least one Wildcard Domain Name in the Subject Alternative Names in the Certificate.

            > Wildcard Domain Name: A string starting with “*.” (U+002A ASTERISK, U+002E FULL STOP) immediately followed by a Fully‐Qualified Domain Name.

            Now of course with your own internal CA, you have complete free rein to issue certificates, as long as they comply with the technical requirements of your software (i.e. webserver and client).

            Also note that a cert issued as '*.*.example.com.' would only match 'hi.com.example.com.', not an additional three labels.
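
            If it helps, here's a rough sketch of that single-label matching rule in Python (my own illustration of the RFC 6125 behaviour, not code from any real TLS library):

              def wildcard_matches(pattern: str, hostname: str) -> bool:
                  # RFC 6125-style matching: '*' may only stand in for the
                  # entire left-most label, never for multiple labels.
                  p = pattern.rstrip(".").lower().split(".")
                  h = hostname.rstrip(".").lower().split(".")
                  if len(p) != len(h):
                      return False  # '*' covers exactly one label
                  if p[0] == "*":
                      p, h = p[1:], h[1:]  # wildcard consumes the left-most label only
                  return p == h

              # '*.example.com.' covers one level, not two:
              assert wildcard_matches("*.example.com.", "test.example.com.")
              assert not wildcard_matches("*.example.com.", "test.test.example.com.")
              # The attack would need its own, very visible, wildcard:
              assert wildcard_matches("*.asus.com.example.com.", "driverhub.asus.com.example.com.")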

  • MrBruh a day ago

    Nice idea. I just checked, and I can confirm there was nothing suspicious in the wildcard records.

  • kstrauser 5 hours ago

    Furthermore:

    - Would a self-signed cert work? Those aren’t in transparency logs.

    - Does it have to be HTTPS?

  • ethan_smith a day ago

    You're right about the wildcard certificate blind spot. An attacker with a wildcard cert for *.example.com could have exploited this without appearing in CT logs specifically for driverhub.asus.com.* domains. This is why CT log monitoring alone isn't sufficient for detecting these types of subdomain takeover vulnerabilities.

    • ZoneZealot a day ago

      It's 'driverhub.asus.com.example.com.' not 'driverhub.example.com.', therefore entirely discoverable in CT logs by searching for (regex): (driverhub|\*)\.asus\.com\.
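
      For example, a quick sketch of how to run that search against crt.sh (crt.sh and its '%' LIKE-style query syntax are real; the filtering around it is just my illustration):

        import json
        import re
        import urllib.request

        # Ask crt.sh for any name of the form driverhub.asus.com.<anything>;
        # '%25' is the URL-encoded '%' wildcard.
        url = "https://crt.sh/?q=driverhub.asus.com.%25&output=json"
        with urllib.request.urlopen(url) as resp:
            entries = json.loads(resp.read())

        # The regex from above. A second query for the '*.asus.com.%' form
        # would be needed to also cover wildcard certificates.
        pattern = re.compile(r"(driverhub|\*)\.asus\.com\.", re.IGNORECASE)
        hits = {
            name
            for entry in entries
            for name in entry.get("name_value", "").splitlines()
            if pattern.search(name)
        }
        print("\n".join(sorted(hits)) or "no matching names found")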

satyanash a day ago

> MY ONBOARD WIFI STILL DOESN’T WORK, I had to buy an external USB WiFi adapter. Thanks for nothing DriverHub.

All this, for literally nought

  • Avamander 21 hours ago

    It's a nice blogpost though.

  • ThrowawayTestr a day ago

    The latest WiFi drivers don't work; you have to use an older version.

rkagerer 16 hours ago

I asked ASUS if they offered bug bounties. They responded saying they do not, but they would instead put my name in their “hall of fame”. This is understandable since ASUS is just a small startup[1] and likely does not have the capital to pay a bounty.

[1]: https://companiesmarketcap.com/asus/marketcap/

  • 93po 13 hours ago

    alternatively, sarcasm.com ;)

    • lucb1e 7 hours ago

      I'm surprised to find that this is just a random person's blog. I was fully prepared for an ad page, a scalped domain, or some corporation trying to make money off of it. On the sadder side, it doesn't seem like this person makes any use of the domain's name at all; they could have had firstlast.cctld for their blog and given this one to someone who wants to put a sarcastic joke on it. But better this than ad farms, so I don't blame them for keeping it!

liendolucas a day ago

> This is understandable since ASUS is just a small startup.

A small startup with a market cap of only 15 B. What is more than understandable is that you don't give a shit, not only about your crappy products but also about the researcher who did HUGE work for your customers.

I truly feel bad for researchers doing this kind of work only to get dismissed/trashed like this. So unfair.

The only thing that ought to be done is to not purchase ASUS products.

cobalt60 19 hours ago

> MY ONBOARD WIFI STILL DOESN’T WORK, I had to buy an external USB WiFi adapter. Thanks for nothing DriverHub.

I feel sorry for this guy, getting sidetracked from the original issue like that. Though it would only take a couple of seconds to note the WLAN chipset from the specs or the OEM packaging and then head to station-drivers.

This is also the very reason I dislike ASUS: I don't want a BIOS flag/switch that natively interacts with a component in the OS layer.

IshKebab a day ago

Wow, no bug bounty is insane. No more ASUS products for me...

  • _pdp_ a day ago

    they are a "small startup"

    • charcircuit a day ago

      They have over 14,500 employees. I wouldn't call that small.

      • Pesthuf a day ago

        You missed a small amount of sarcasm there

        • 7bit a day ago

          "Small" as in "small startup"

          • stavros a day ago

            I think that was exactly the joke.

  • swinglock a day ago

    Both ASUS software and customer support are atrocious and always have been.

sigmaisaletter a day ago

Obligatory "Scumbag Asus" video link:

Invidious https://inv.nadeko.net/watch?v=cbGfc-JBxlY

YouTube https://youtube.com/watch?v=cbGfc-JBxlY

"ASUS emailed us last week (...) and asked if they could fly out to our office this week to meet with us about the issues and speak "openly." We told them we'd be down for it but that we'd have to record the conversation. They did say they wanted to speak openly, after all. They haven't replied to us for 5 days. So... ASUS had a chance to correct this. We were holding the video to afford that opportunity. But as soon as we said "sure, but we're filming it because we want a record of what's promised," we get silence."

Edit: formatting

  • jeffparsons a day ago

    So are there any "basically respectable" motherboard manufacturers? Or is there a similar story about each of the big players?

    Asking for a friend who is thinking about building a new PC soon.

    • Arnavion 15 hours ago

      ASRock (a sub-brand of ASUS, but seemingly independent on the product and dev side) has been fine for me over the ~10 years I've bought their mobos. There was that thing a few months ago with X870 mobos apparently frying CPUs, but I think that was never sufficiently proven to be their fault?

      That said, their X670 / B650 boards have the same setting this article is about, and it could be just as broken on the software side as ASUS's is, but I wouldn't know, because I don't use Windows, so I disabled it.

      • oynqr 3 hours ago

        ASUS and ASRock have been separate since 2010.

        • Arnavion 3 hours ago

          Its new owner since 2010 is still part of the ASUS group, but sure, it's technically a different company from ASUS proper.

    • encom a day ago

      All the consumer brands are pozzed. My last build (i7-14700K) used an MSI board. Their Secure Boot is still broken. The BIOS setup is a complete mess, and all the settings are reset after a BIOS update. I have to unplug and replug my USB keyboard after a power-off, or it doesn't work. But I insisted on a board without RGB lights, and that limited the selection. Computers are over.

      • ribcage 20 hours ago

        There really needs to be an open-source project for a PC motherboard.

        • matheusmoreira 19 hours ago

          Just a few days ago people were talking about this on the KiCad Discord. A Chinese team made an open-hardware x86_64 motherboard and published it not too long ago. Then they were essentially wiped off the face of the planet.

          That was the day I learned you literally cannot develop a computer motherboard without Intel's permission. Turns out Intel is no different from the likes of Nintendo.

          • mrheosuper 6 hours ago

            I doubt that.

            Chinese "tinkerers" have been making countless "X99" motherboards that reuse consumer chipsets like the H81 or B85.

            I don't think Intel approves of that.

          • Arnavion 15 hours ago

            Yes, if you want to go that route, you'll be better off going with RISC-V.

  • Barbing a day ago

    This makes me angry, so can anyone think of a legitimate steelman of their position?

    I expect my view is consistent with reality, though: they're chasing profits and getting away with it, so why go on the record and look bad when they can just ignore it and spend that time on marketing?

    • vachina a day ago

      ASUS doesn’t want to deal with the social media horde, who can and will cherry pick words and take things out of context.

      If a person comes to talk business with a camera attached to his head, I know he does not come in good faith.

      • sigmaisaletter 11 hours ago

        It's a journalist coming because you said you wanted to talk to the journalist, because of the bad press you got before, because you fucked up.

        Seems fair to take a camera.

ritcgab 16 hours ago

This is a really well-written blog post.

The practice of injecting pre-installed software through the BIOS is such a deal-breaker. Unfortunately, it seems to be widely adopted by the major players in the motherboard market.

smileybarry 15 hours ago

I like ASUS products but I disable the UEFI-installed support app every single time. IIRC it used to be a full ROG Armoury Crate installation, which is really annoying to uninstall.

When ASUS acquired the NUC business from Intel, they kept BIOS updates going, but at some point a “MyASUS” setup app got added to the UEFI, like with their other motherboards. Thankfully, there is also an option to disable it, and IIRC it defaults to disabled, at least if you updated the BIOS from an Intel NUC version.

saghm 8 hours ago

I have a similar ASUS motherboard in the desktop I had custom-built a few years ago, and I've mostly just been annoyed that I have to have Windows installed to update the BIOS at all, given that the previous one I had (which I think was also from them?) would just let me do it over ethernet if I booted directly into the BIOS setup menu. Now I have much larger concerns, in addition to the risk from updating less frequently now seeming much larger...

  • Arnavion 4 hours ago

    Any mobo will let you download the firmware file to a FAT32-formatted USB drive etc., and then use that to update the UEFI from within the UEFI UI.

    Yes, some mobos have a feature in their UEFI to connect to the internet and download the update, but it's best not to rely on that, since you have no idea how securely it is implemented. Considering the submitted article is about a shitty implementation in a regular Windows program, you can be sure the implementation in the UEFI is even shittier (it may not check certs, may not even use HTTPS, etc.). ASRock used to have an "Internet Flash" feature in their UEFI and then suddenly removed it, probably because it was too insecure to fix.

Avamander 21 hours ago

A few of the drivers they install (or want to install) are also on Microsoft's blocklist of vulnerable, actively exploited drivers. So that's fun; they have no intention of fixing it because they do not support "third party software". I'm also pretty sure their installer doesn't work without unencrypted HTTP traffic being let through. Plus, they keep offering bloatware as "updates".

On top of it all, the software they offer is slow and buggy on brand-new hardware.

But most of those issues also exist with AMD's or Gigabyte's drivers; most hardware vendors seem trashy like that. If you install Samsung Magician (for their SSDs), it even asks you whether you're in the EEA (because of the privacy laws, I suspect); it's absolutely crazy.

Microsoft should make it *significantly* harder to ship drivers outside of Windows Update and they should forbid any telemetry/analytics without consent.

I find Linux's hardware support model significantly nicer; although some rarer things don't work OOB, there's none of this bullshit.

  • userbinator 6 hours ago

    > Microsoft should make it significantly harder to ship drivers outside of Windows Update

    No. No no no no no no no NO! That just centralises even more control to MS.

    What we really need is for more people to develop open-source Windows drivers for existing hardware, or encourage the use of Linux.

  • matheusmoreira 19 hours ago

    Hardware manufacturers consistently ship the worst software in existence. It's just a cost center to them. They've already sold the thing; it doesn't matter anymore.

    My laptop has a fan and keyboard LED application that requires kernel access and takes over a minute to display a window on screen. Not to mention it's Windows-only.

    Words can barely describe just how aggravating that thing was. One of the best things I've ever done is reverse engineer that piece of crap and create a free software replacement for Linux. Mine works instantly; I just feed it a configuration file. I intend to do this for every piece of hardware I buy from now on.

    • Avamander 19 hours ago

      I really wish someone made such software for both ASUS and Gigabyte, without dangerous kernel drivers.

      In that sense fwupd has been an amazing development, as there's now a chance that you can update the firmware of your hardware on Linux and don't have to boot Windows.

      • matheusmoreira 16 hours ago

        Actually having hardware lying around to reverse engineer is the limiting factor for me. I suppose I could give it a shot if people who own the devices sent me the required data. I'd need their help with testing.

        USB stuff was really nice to work with. Wireshark made it really easy to intercept the control commands. For example, to configure my keyboard's RGB LEDs I need to send 0xCC01llrrggbb7f over the USB control channel; the ll identifies the LED and the rrggbb sets the color. Given this sort of data, it's a simple matter to write a program that sends it.
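
        To give an idea, a minimal pyusb sketch (the vendor/product IDs and the control-transfer parameters here are made-up placeholders; the real values come straight out of the Wireshark capture):

          import usb.core  # pyusb

          VENDOR_ID, PRODUCT_ID = 0x1234, 0x5678  # placeholders; use your device's IDs

          def set_led(dev, led: int, r: int, g: int, b: int) -> None:
              # The 0xCC01llrrggbb7f packet described above.
              packet = bytes([0xCC, 0x01, led, r, g, b, 0x7F])
              # bmRequestType/bRequest/wValue/wIndex are guesses; read the real
              # ones off the captured control transfer in Wireshark.
              dev.ctrl_transfer(0x21, 0x09, 0x0300, 0x00, packet)

          dev = usb.core.find(idVendor=VENDOR_ID, idProduct=PRODUCT_ID)
          if dev is None:
              raise SystemExit("keyboard not found")
          set_led(dev, led=0x01, r=0xFF, g=0x00, b=0x00)  # first LED, red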

        Reverse engineering the ACPI stuff seems to be more involved. I wasn't able to intercept communications on the Windows side. On Linux I managed to dump the DSDT tables and decompile the WMI methods, but that just gave me stub code. If there's anything in there, it must be somehow hidden. I'm hoping someone more experienced will provide some pointers in this thread.
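
        (For reference, the dump-and-decompile step I mean is just the following; it needs root and ACPICA's iasl installed:)

          import pathlib
          import subprocess

          # Dump the DSDT and decompile it with ACPICA's iasl.
          raw = pathlib.Path("/sys/firmware/acpi/tables/DSDT").read_bytes()
          pathlib.Path("dsdt.dat").write_bytes(raw)
          subprocess.run(["iasl", "-d", "dsdt.dat"], check=True)  # writes dsdt.dsl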

cebert 20 hours ago

I assume the timeline posted in this article is a year off, and the author means 2024 instead of 2025.

  • psolidgold 15 hours ago

    Why would you assume that instead of the more likely scenario of them using DD/MM/YYYY format? The CVE linked has a date in 2025. Not everyone uses the insane American date formatting.

rasz 10 hours ago

> When submitting the vulnerability report through ASUS’s Security Advisory form, Amazon CloudFront flagged the attached PoC as a malicious request and blocked the submission.

Reminds me of the time I reported a SQL disclosure vuln to Vivaldi and their WAF banned my account for - wait for it - 'SQL injection attempt', so hard that their admin was unable to unlock it :)

serguzest 13 hours ago

It is not just a mainboard issue. I had an ASUS mechanical keyboard. After I started using it, Windows kept installing software and background services on my system, one of which had a listening port. I kept deleting them manually, and no matter what I did, Windows kept reinstalling them without my consent. It was really annoying.

thwaysec a day ago

[flagged]

  • kenjackson a day ago

    Why is this not an RCE?

    • thwaysec a day ago

      [flagged]

      • sitharus a day ago

        I suggest you re-read the article carefully. The author shows that a website can be crafted that makes the ASUS software download and execute an attacker-controlled app from a server the attacker controls.

      • swznd a day ago

        He changed the origin, which means any website can communicate with 127.0.0.1:53000.
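
        In spirit, the bug is something like this (a reconstruction based on the write-up, not ASUS's actual code, with the obvious fix next to it):

          def origin_allowed_flawed(origin: str) -> bool:
              # Only checks that the trusted host appears in the Origin header,
              # so an attacker-registered lookalike domain passes too.
              return "driverhub.asus.com" in origin

          def origin_allowed_fixed(origin: str) -> bool:
              # Exact match against an allow-list closes the hole.
              return origin in {"https://driverhub.asus.com"}

          origin_allowed_flawed("https://driverhub.asus.com.attacker.example")  # True: bypass
          origin_allowed_fixed("https://driverhub.asus.com.attacker.example")   # False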

      • wonnage a day ago

        Did you not see the PoC video?

        • thwaysec a day ago

          Seems I was wrong. I am utterly surprised at the lack of security in modern browsers. Yes, that backend is misconfigured, but why this request is even allowed to take place in the first place is utterly mind-blowing to me.

          • bdavbdav 21 hours ago

            What would you suggest the browser do? All it does is send the correct origin, which in this case would be downloads.asus.badsite(.)com.

          • shakna a day ago

            Where did the browser go wrong here? It followed all security practices. The browser isn't what is running the payload.

            Unless you're suggesting that nobody should be able to download programs unless blessed by some large company?

            • thwaysec 21 hours ago

              The browser is allowing remote code to talk to 127.0.0.1.

              • shakna 19 hours ago

                Right... and that's only blocked if the host asks for it via CORS or same-origin policies, because otherwise you'd break any number of app combinations. It's up to the server on localhost not to blindly trust, and that has been the case since the beginning.

                • thwaysec 18 hours ago

                  It might have been there since the beginning, but that doesn't make it less surprising or bad. That's a _ridiculously_ bad thing to allow: any website can talk to just about ANY port on your local machine. Incredible.

          • immibis a day ago

            Because the browser tells the backend exactly where the request came from, and the backend agrees to allow requests from there.

  • buckle8017 a day ago

    Are you confused?

    This is an RCE.

nexoft a day ago

I read "Acer" for some reason, and was surprised and disappointed that it is actually ASUS.

ikekkdcjkfke a day ago

All our motherboards, the root of trust, are made in Taiwan. All props to their industriousness and agility, but shouldn't there be a Western alternative that can be purchased?