It has nothing to do with releasing it alongside FaceTime integration, and everything to do with the fact that this is basically Apple violating their own "responsible disclosure" policy and announcing a 0-day vulnerability in OS X. That is the whole point of responsible disclosure: maybe the vulnerability is being used in the wild, but by delaying its disclosure until the vendor can patch it, the potential for exploitation is greatly reduced. You say that as if it's not a big deal. As I said, the iOS bug was almost certainly already being exploited. Delaying the release of a fix for that seems like the absolute last thing anyone should be suggesting they do.

As a fellow Mac user: your apologism is showing. There is no justification for this bug. It never should have shipped. It never should have gone unnoticed for so long. It never should have been announced prior to a patch being available. No matter how you slice it, Apple failed miserably, and "iOS was probably being exploited" is not an excuse. Apple has how much money? They could have afforded it. They were simply too incompetent, after a chain of incompetence, to do so. If you're arguing that they should have added more people in order to ship faster, the only incompetence on display is your own.

That's not how software development works, which you should know if you've done it professionally. It may be a one-line change, but the patch still has to be validated across the entire testing matrix of their entire product line.


That is a trivially parallelizable problem. Don't cargo-cult "common wisdom"; the only incompetence on display here is your axiomatic acceptance of things you don't understand. If you'd meant QA, you would have said QA, not engineering. You don't want engineers doing QA, which you would also know if you actually worked in the industry. They're notoriously bad at it.

You'd also know that a test cycle takes a certain amount of time, and for something as complex as OS X, that amount is going to be measured in days per configuration, and there's nothing you can do about that -- adding more people will, again, just slow it down.

Admit you don't know what you're talking about and move on. Or just stop talking, whatever. I worked at Apple, in that department, so yes, I'm aware of what I'm saying and why. Stop trying to acquire internet points by being a jerk. Though, really, I should just accept this absurd statement, since it amounts to you admitting your own incompetence.

This from the guy who decided his scintillating contribution to the thread would be redundantly accusing people of "apologism" and "incompetence". You do understand the people who actually do work at Apple are human beings, and that you are flinging insults at them, right? Do you not already have a bridge to troll under? Yes, and I know who they are.

The point of responsible disclosure, as opposed to telling the company and then not telling anyone else, is to force the company into action: to make them fix it under the threat of later public disclosure. If this is true, then their process could use some adjustment. Contrast with Google Chrome, which has the regular flow of changes going through release channels, but also the ability to update virtually all clients within a matter of hours if a critical issue is found. I realize there is a lot more QA necessary for an OS update, but I'm not convinced that a fix for this specific bug would have taken a long time to QA.

Certainly not anywhere near as long as we've waited for this update, or as long as a lot of people will delay installing it because it is huge. To hell with the GM process. There should be a way to push out simple changes like this as soon as possible, for cases like this that really matter. That's a great way to let a bad build slip out, which would do significantly more harm than any bug it could possibly hope to fix.

Which is why you need a process for shipping out emergency fixes. Microsoft can do it in 24 hours, and on the desktop, the impact of a broken build for Microsoft is staggeringly large when compared to Apple. The GM process is there precisely to stop bugs like this from making it into production. Who knows how many potential bugs it has stopped; you can't know. To play devil's advocate a bit: their process still needs some work. There isn't a good reason why they couldn't have pushed this patch through its own approval process simultaneously, with a higher priority for staff to choose it over FaceTime.

At Pwn2Own each year, how many browsers have vulnerabilities that allow remote code execution? All of them. How many of those vulnerabilities are zero-days? A significant number. This happens every single year. Even the advanced protections in, e.g., Chrome don't stop new vulnerabilities from being found on a regular basis. And all of these products are deployed to millions of users.

You could complain about Apple's response to this bug, and that might be a reasonable complaint to make. At least Google patches bugs quickly when they surface at Pwn2Own. But that's different from claiming the bugs shouldn't have existed or should have been caught before making it into production. Bugs are fundamentally hard to find, and it's not really getting any easier.

While it's true that almost all software has bugs that can result in exploits, I think most of the exploits used in Pwn2Own are the result of complex interactions between subsystems that are hard to predict. As software gets more complex, the attack surface increases. The Apple bug isn't really in that class of exploit. One of the things that worries me is that this bug could have been caught so easily with basic unit tests.

Seems unlikely. Or perhaps this component didn't warrant an extensive test suite? I hope not! Not really sure what the explanation is. I mean, I certainly can't claim that all my code is run through extensive tests before every deploy, but then I'm not working on the security tools that underpin an entire operating system.

In this case, the very test you're describing would not have worked. For a better writeup, see agl's post[1] on the matter. The basic gist of it: on affected clients, the server may use any combination of private key and certificate, because the signature on the key exchange is never actually checked. Most SSL libraries used on the server side will make sure the moduli of the certificate and private key match and abort if they don't, which makes that scenario awkward to even reproduce in a test. What would have caught the bug: automatic code indentation, or any sort of compile-time warning about dead code.
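To make the dead-code point concrete, here is a minimal, self-contained sketch of the pattern. It is not Apple's actual code; hash_update, hash_final and verify_signature are made-up stand-ins (the real function is SecureTransport's SSLVerifySignedServerKeyExchange). The duplicated goto is not guarded by the if above it, so everything after it, including the real signature check, is dead code, and err still holds the 0 from the last successful call:

    /* Sketch of the "goto fail" shape; the functions here are hypothetical
     * stand-ins, not Apple APIs. */
    #include <stdio.h>

    static int hash_update(void)      { return 0; }   /* succeeds            */
    static int hash_final(void)       { return 0; }   /* succeeds            */
    static int verify_signature(void) { return 1; }   /* a BAD signature     */

    static int verify_server_key_exchange(void)
    {
        int err;

        if ((err = hash_update()) != 0)
            goto fail;
            goto fail;                  /* the bug: unconditional, always taken */
        if ((err = hash_final()) != 0)  /* dead code from here on               */
            goto fail;
        err = verify_signature();       /* the actual check never runs          */

    fail:
        return err;                     /* returns 0: "signature accepted"      */
    }

    int main(void)
    {
        printf("verify returned %d (0 means accepted)\n",
               verify_server_key_exchange());
        return 0;
    }

Compiling this with clang's -Wunreachable-code (it is not part of -Wall) should warn that the statements after the second goto can never execute, and any automatic re-indentation would pull that second goto out to the same level as the if, making it obvious that it is unconditional.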

I see what you mean; the amount of work and foresight needed to predict the bug and write a test for it does seem unrealistic in that light. Two entire operating systems. One single library. As I said: it's not just another security bug. It's an easy-to-spot bug, in the most critical part of the code, of a fundamental security library. THIS is what makes it unacceptable. It pretty much means the change never went through code review, or was planted.

While working at AWS and seeing outages posted here, along with the wildly inaccurate guesses at what the problems were and the wildly simplistic fixes that were assumed to be easy to put in place, I can say: systems like this are more complicated than you think they are. Agreed; everyone enjoys playing armchair critic, and most of the time they have no idea how the internal systems are structured or managed.

Hence the "or". In this case, stupidity is equally unacceptable. To err is human. In the grand scheme of things, it's a PR fuck-up; nothing more. I doubt it affects you directly enough as a Gentoo user to have such a strong reaction anyway, but if it makes you feel superior, then all power to you. The error is not the programmer's. The fault is not in the code; it's in the processes that were chosen by management.

Indeed, to err is human.

This is why you're a negligent jackass if you don't plan for errors and build multiple systems to prevent and detect them, at least until computers start programming themselves for us. I am interested to hear your opinion: at what point should a vulnerability be considered unacceptable? I don't even know what that would mean, to make an existing vulnerability "unacceptable". Vulnerabilities of all kinds exist. We need to find them, learn from them, and fix them. Getting hung up on whether or not they are "acceptable" is just kind of weird.

Bad stuff happens, incompetence happens, mistakes happen.

None of that is "acceptable", but it happens just the same. Creating an environment where some kinds of mistakes are "unacceptable" doesn't eliminate those kinds of mistakes; it just causes people to stop reporting them. Complaining about their release cycle makes some sense. This is so different from Pwn2Own I don't even know where to start. This failure case shows up in the most basic test of what the library is supposed to do. The whole point of having certificate validation is that you identify invalid certificates. Try to come up with a reason why the library has no regression tests, or why there wasn't a regression test verifying that, in the default case, an invalid certificate is reported as such.
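Concretely, the kind of regression test being asked for here might look something like the sketch below. It uses OpenSSL rather than SecureTransport purely for illustration, and "untrusted.pem" is a placeholder fixture: a certificate with no anchor in a deliberately empty trust store must fail validation. As noted upthread, a test at this level would not have caught this particular bug, since chain validation itself still worked; catching it would need a case where the key-exchange signature fails to verify.

    /* Sketch: an untrusted certificate must be rejected by chain validation.
     * OpenSSL is used for illustration only; "untrusted.pem" is a placeholder. */
    #include <assert.h>
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509_vfy.h>

    int main(void)
    {
        FILE *fp = fopen("untrusted.pem", "r");      /* placeholder test fixture */
        assert(fp != NULL);
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        assert(cert != NULL);

        X509_STORE *store = X509_STORE_new();        /* deliberately no trust anchors */
        X509_STORE_CTX *ctx = X509_STORE_CTX_new();
        X509_STORE_CTX_init(ctx, store, cert, NULL);

        /* The whole point of the library: this must NOT report success (1). */
        assert(X509_verify_cert(ctx) != 1);

        X509_STORE_CTX_free(ctx);
        X509_STORE_free(store);
        X509_free(cert);
        puts("ok: untrusted certificate was rejected");
        return 0;
    }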

Have you seen the content of the security update? While this is too long IMO, similarly critical bugs in Windows have been left unfixed for years. While that may have been true, I dare say that the burnt child that is today's Microsoft would not have mishandled this so horribly. It made a lot of sense from a QA perspective to do it just the way they did. I'm fully on board with the "bugs happen" point of view, but if they really have had this particular fix in the pipeline for weeks, then no, they really should have done an out-of-band update of a much smaller scope.

FaceTime Audio on Mac! Heck yes, finally. How many malicious exploits have occurred in the last 5 days? I'm genuinely curious if anyone has a guess. Because iOS and Mac OS automatically connect to known WiFi hotspot names, it's possible to create a hotspot with the same name as Starbucks's WiFi, even if you're nowhere near a Starbucks, and iOS will happily connect to it unless other preferred networks are around and it picks them over yours. Also, a lot of smaller coffee shops will just set up a WiFi router, give it a password, and call it done, even though many of those routers have known exploits.

On top of all that, there are lots of Asus routers out there running firmware that can be remotely exploited and for which there is no patch[1]. Or Linksys routers[2]. Or D-Link[3]. All an attacker needs to do is change your DNS server settings and they can send you to any server they want instead of the server that you expected.

On its own it's safe, but if you have control of someone's WiFi router, which is apparently trivial, then it's entirely possible to snoop on a huge swath of their supposedly secure internet traffic. The only real saving grace here is that OS code signing hasn't been compromised, so the system won't install a backdoored update; at least that part of the chain is secure and people can update. If the hotspot it tries to connect to does not have a matching PSK, it should fail the handshake, and no, the client isn't disclosing the PSK to anyone during the handshake.

If any of this is not true, that would be another vulnerability. Anyone can hide an access point in a backpack and claim to be Starbucks, or anything else; most people don't care, they'll hook themselves up to just about anything. Yes, each other: if you successfully connect to an AP using a PSK, you can be pretty sure that AP knows the same key and probably isn't someone impersonating it. Note, however, that anyone with the PSK can impersonate the access point.
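For anyone wondering how both sides can prove knowledge of the key without ever sending it: client and AP each derive the same 256-bit pairwise master key locally from the passphrase and the SSID, and the 4-way handshake then only exchanges nonces and MICs computed from keys derived from that PMK. A rough sketch of the derivation, using the standard WPA2 parameters (PBKDF2-HMAC-SHA1, 4096 iterations); the SSID and passphrase below are made up, and OpenSSL is used here just for the PBKDF2 call:

    /* Sketch: WPA2-PSK pairwise master key derivation. Both client and AP
     * compute this locally; the PMK itself is never transmitted. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>

    int main(void)
    {
        const char *ssid       = "CoffeeShopWiFi";   /* placeholder SSID (the salt) */
        const char *passphrase = "letmein12345";     /* placeholder passphrase      */
        unsigned char pmk[32];                       /* 256-bit pairwise master key */

        /* Standard WPA2 parameters: PBKDF2-HMAC-SHA1, 4096 iterations, 32 bytes. */
        PKCS5_PBKDF2_HMAC_SHA1(passphrase, (int)strlen(passphrase),
                               (const unsigned char *)ssid, (int)strlen(ssid),
                               4096, (int)sizeof pmk, pmk);

        for (size_t i = 0; i < sizeof pmk; i++)
            printf("%02x", pmk[i]);
        printf("\n");
        return 0;
    }

Anyone who knows the passphrase, including the shady AP in the backpack, can derive the same PMK, which is exactly why a shared PSK authenticates the network to you only as well as that passphrase is kept secret.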

Remember kids, when you connect to a network without a PSK or more elaborate authentication where you verify the identity of the AP, you generally have no idea who is operating that network. What stops someone from doing that anyway, with their own hot spot, and just serving a self-signed certificate? Will the browser remember the old certificate, and put up the warning?

A self-signed certificate will throw an error in the browser because the certificate chain isn't trusted, even if you have the appropriate key. In SSL, you have the certificate and the key; the key is private and secret, and the certificate is public. I may be wrong, but isn't this part of WebKit and not just Safari? In which case this isn't solely Apple. It's part of neither. What does it mean to "steam your online account"? It makes the first-order problem of using iPods to transmit confidential medical records feel rather trivial.

Yet there is nothing for which you need ask forgiveness. A model of the world where everyone lives in circumstances where either Comcast or Verizon is always available for one's internet connection [and it goes without saying that neither could possibly be compromised] is so absurd that you can only be speaking tongue in cheek. Well played. This seems like the key to me.

Seems naive to think that your trusted ISP is the only party you'll ever get internet access through. Seems naive to think that your ISP can be trusted at all, since they can be compelled by law and sworn to secrecy. "Easy to understand" definitely drives the reporting. Compare the reporting of this to the Chrome TLS vulnerability patched yesterday; that's actually a quite easy-to-understand problem as well.

The difference is pretty major. It's believable that someone forgot to consider the case where a new certificate was negotiated. It's downright inconceivable that no one tested whether a bad certificate failed validation.

That's like selling a pregnancy test without testing what happens if the person isn't pregnant. But being tricked into going to bankofamericaa.com is a risk with or without this bug. The difference is that this bug will grant you the lock icon, and your browser will "guarantee" you're speaking to the real bankofamerica.com. Practically speaking, that probably doesn't matter, because someone who understands that won't click on an email and log in to bankofamericaa.com. But there is a difference. It is very easy to get a lock icon on bankofamericaa.com. What makes this bug interesting is that you can get a lock icon for a fake website on bankofamerica.com.

Without this bug, they wouldn't be able to use BofA's own certificate to do it. Why does this matter? The browser isn't even at bankofamerica.com. No browser would notice, even with the fanciest watchdog services and certificate pinning, whether the certificate of an unrelated website is "authentic" or not. The only way you are going to notice the name being wrong is if the user opens the certificate details dialog and reads the contents; do you seriously think someone is going to do that and not look at the URL? ;P From my experience, people really do pay attention to EV certs (the green bar), so I'm not sure it's quite as simple as you're putting it.

Well yeah, but the point is there's nothing stopping the attackers from putting a valid certificate on bankofamericaa.com. Yes, you're right. They can't use BofA's own certificate anyway, because the domain doesn't match. Think "outside the box" of the US. Not all Mac users have "trustworthy" ISPs or are on trusted networks. I believe that the recent NSA revelations are the primary reason this bug is so concerning.
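For completeness, the reason BofA's real certificate is useless on the look-alike domain is plain hostname matching: the browser checks the name it is connecting to against the names listed in the certificate. A sketch of that check, using OpenSSL's X509_check_host purely for illustration; "bofa.pem" is a placeholder for whatever certificate you want to inspect:

    /* Sketch: a certificate only matches the hostnames it actually names.
     * "bofa.pem" is a placeholder fixture. */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/pem.h>
    #include <openssl/x509v3.h>

    static void check(X509 *cert, const char *host)
    {
        /* X509_check_host: 1 = match, 0 = mismatch, negative = error. */
        int rc = X509_check_host(cert, host, strlen(host), 0, NULL);
        printf("%-24s -> %s\n", host, rc == 1 ? "matches" : "does not match");
    }

    int main(void)
    {
        FILE *fp = fopen("bofa.pem", "r");           /* placeholder fixture */
        if (!fp) { perror("bofa.pem"); return 1; }
        X509 *cert = PEM_read_X509(fp, NULL, NULL, NULL);
        fclose(fp);
        if (!cert) return 1;

        check(cert, "bankofamerica.com");            /* what the cert is issued for */
        check(cert, "bankofamericaa.com");           /* the look-alike domain       */

        X509_free(cert);
        return 0;
    }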

Then you are doing something wrong. I have managed thousands of machines across thousands of sites and have not come across an error like this. I have alternatively set the proxy as the last step of the munki packages, after all the software has been installed, prior to deploying the machine to a user. Issue: what I was experiencing was that during the munki run, installation of some software, most notably Microsoft Office updates, would stall, stuck at the preparing stage.

So what is ocspd? It is the OCSP and CRL revocation-checking daemon used by the Security framework. How long has this been an issue? Option 1 is not possible for us, so I needed a way to change these settings from "Best attempt" to "Off". After a lot of digging I was able to locate the required keys and set them via defaults as part of a first-boot script that runs before anything else in my imaging workflow. Would be good to understand the full context, please.

Hope that helps a bit.

