I worked with Phil on malicious Firefox extensions – very briefly at SSTIC, in detail on the lab’s blog, and in an unpublished short paper.
Since some people asked: yes, the issues have been reported to the Mozilla security team (thanks to JP Gaulier and Tristan Nitot). The result is a bug report marked as invalid (which is fair enough, since what we wanted to communicate was not a bug report but design issues).
So basically the situation is: ActiveX is bad because there is absolutely no security policy. There is absolutely no security policy for Firefox extensions either, but that’s apparently fine.
I’m out, I really need a double shot of espresso now.
7 thoughts on “Malicious Firefox Extensions – continued”
No trackbacks possible, apparently. Here is my contribution :)
ActiveX controls and Firefox extensions are very different things.
ActiveX controls have no UI, expose native code to web pages, and are installed by users who encounter web sites that require them (or come preinstalled). IIRC, their security problems resulted mostly from the native code being buggy (e.g. containing buffer overflows) and from confusion over which sites were allowed to access which controls.
Firefox extensions generally expose features to users rather than to web sites, so most of them don’t even have the potential to contain security holes. Users start out with none and most users install none, because web sites don’t require them.
ActiveX also had a very insecure installation prompt — it was too complicated for users to read the whole thing and sites got to inject tons of confusing and misdirecting text into it. Firefox doesn’t even let sites (other than addons.mozilla.org) prompt to install extensions.
Hi Jesse, it’s nice to hear some constructive comments!
Sure, ActiveX controls and Firefox extensions are different animals. But in terms of functionality (what they can do) and how they get executed, they look pretty much the same to a malware writer.
In both cases, I have a way to embed malware in the browser: I can hook browser events (useful for form grabbing, phishing, adware and so on) and evade personal firewalls. In other words, I’m not dealing with legitimate code that might contain vulnerabilities; the extension itself is the malicious payload.
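To make the hooking point concrete, here is a minimal sketch of the interception pattern such an extension relies on. The `browser.submitForm` object and function below are illustrative stand-ins, not real Firefox or XPCOM APIs; an actual extension would attach to DOM `submit` events with chrome privileges, but the shape of the trick is the same: wrap the original behaviour, copy the data, then call through so the user notices nothing.

```javascript
// Illustrative stand-in for a browser-internal function (NOT a real API).
const browser = {
  submitForm(fields) {
    return `submitted: ${Object.keys(fields).join(",")}`;
  },
};

const stolen = [];

// The malicious code wraps the original function, copies the form data
// for later exfiltration, then calls through so everything looks normal.
const original = browser.submitForm.bind(browser);
browser.submitForm = (fields) => {
  stolen.push({ ...fields }); // exfiltrate later, through the browser itself
  return original(fields);    // behave exactly as before
};

console.log(browser.submitForm({ user: "alice", pass: "hunter2" }));
// → submitted: user,pass   (and `stolen` now holds a copy of the credentials)
```

The user-visible behaviour is unchanged, which is precisely why this class of attack is hard to notice from the outside.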
I don’t know the IE installation prompt, but to install an extension that is not hosted on AMO, such as XPCOMViewer, instead of just clicking “yes” you have to click “authorise” and then “yes”. That’s fine; I don’t think Firefox can do much more to warn users. The problem is that users *really* want to see the dancing bunnies.
And my point is: once they are in place, the dancing bunnies can really pwn you.
Firefox can’t really protect you from malware already on your system by removing API hooks. That’s a losing game; if we start removing hooks, malware will shift to replacing or patching Firefox’s own chrome JS, or even the Firefox binary.
As for convincing users to install things, it’s probably still easier to convince users to download and run an .exe. I’d like Firefox’s .exe download UI to match its extension installation UI.
The best tool we have against the “dancing bunnies” problem is the Web itself: at least some users will question the need to install something just to be entertained.
> That’s a losing game; if we start removing hooks, malware will shift to replacing or patching Firefox’s own chrome JS, or even the Firefox binary.
According to Dave Townsend, some malware has already used this technique. Quoting him:
“What isn’t noted is that you don’t need the extension manager to “infect” Firefox in this way. Assuming you have write access to the install directory you can just add stuff to the chrome and components directory to add code that can’t be seen or uninstalled through the extension manager. Malware has used this mechanism in the past.”
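To illustrate the mechanism Townsend describes (the filenames below are hypothetical; the directory layout is that of a typical Windows install of Firefox of the era), write access to the install directory is all it takes:

```
C:\Program Files\Mozilla Firefox\
├── chrome\
│   └── payload.jar      <- hypothetical chrome package, loaded at startup
└── components\
    └── payload.js       <- hypothetical component, auto-registered,
                            invisible to the extension manager
```

Nothing here goes through the extension manager, so nothing shows up in its UI to be seen or uninstalled.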
As for ActiveX exposing native code to web pages while Firefox extensions do not, this is not quite true. For instance, the Cooliris extension includes native DLLs, and no specific user authorization is required to load and access these native libraries.
And as for malware that does not need to infect Firefox once it is already on the system: sure, it could use keylogging or other old-school techniques. But the thing is, it is just so much easier for the malware to steal credentials, bind to DOM events of specific elements, and so on, by loading a malicious extension into Firefox. And while a personal firewall is likely to block the malware’s own attempts to send its stolen data (or warn the user about it), using Firefox to send it will go unnoticed.
The discussion basically orbits around whether Firefox should practice defensive coding and follow the principles of reduced attack surface and defence in depth (wow… what a concatenation of buzzwords).
My opinion is: yes, it should, and I completely agree with the view of Dan et al.
The problem with the bug report is the difference between what was intended to be reported and what was understood to be reported.
The point is not “if malware is on the system, it’s game over anyway”. The point is that they propose to add a security framework around add-ons: signing add-ons and restricting what extensions can and can’t do.
If Firefox wants to remain as malware-resistant as it has been in the past, the team should listen.