Browser encryption of DNS

As a term, "professional" has become tarnished, since people misuse it to defend a particular point of view: if professionals use xyz, then xyz must be fine for most people.
You can short-circuit such nonsense by pointing out that professionals drive cars with four-digit horsepower in the rain and without any fancy electronics, something Joe Sixpack wouldn't even want to try.
 
As an example, consider the thing on my lap. It is a high-end 15" MacBook Pro (paid for by my employer). I am running three applications right now: a browser (which does most of the work, including both home and work e-mail), an ssh client (which I use to log in to computers), and a VNC client (for working on a stationary Mac at home, which I use for scanning documents). So other than 3 or 4 fixed hosts, nearly all the network traffic goes to/from the browser.

While I would never question your computer expertise, somehow that doesn't sound like a good thing to me. I realize SSH traffic is encrypted, but having it routed through www/firefox-esr, after everything I do to quash its tracking and spying eyes, gives me a bad feeling.

That said, I don't even allow myself remote access and have never used VNC, but it seems to go against what I personally consider good security practice.

Feel free to correct me if I'm wrong. :p
 
Have you tried using cloud accounts like Azure or Google? All your documents are online (and when I say "documents", I don't mean just Word files, but databases, programs, queries, makefiles, spreadsheets, e-mails), and they are all searchable.

I think this is the core of the issue: it is too hard to get documents between different cloud accounts. If you want to get your Google Docs files into an Office 365 or web ssh session, it is too awkward. You end up having to download them to the local machine and then re-upload them again.

And I suppose there is web ssh, but you have to run it yourself; there is no "cloud" provider for it that integrates with other cloud services, the way a traditional PC approach keeps everything on the same disk and easy to access.
 
That said, I don't even allow myself remote access and have never used VNC, but it seems to go against what I personally consider good security practice.
That's a darn good question, and I had to think about it for a while. Why is this not insanely insecure?

The answer is: that desktop machine is on the *internal* network. While connections that originate from it to the outside world are allowed, it cannot be seen from the internet at large. As in: it doesn't even have an IP address that's routable to the world.

In addition, the VNC connection is password-protected. Which currently annoys me: I have to type in that password (a pretty random string of about 12 or 15 characters) every time I start the VNC connection. Clearly pretty annoying, but not so much that I have put any effort into working around it (I only use that desktop machine roughly once per week, on a weekend, for a few hours). And I think VNC connections between Macs are encrypted, so even if someone were to listen to network traffic (which is de facto impossible in our network setup and geographic location, unless they are in a helicopter), I think I'm good to go.

So I think it's actually pretty secure.

(The helicopter part is actually not a joke: We live in California, very near Silicon Valley, but in a rural and mountainous area. This afternoon there was a very small wildland fire near our house, and there were two helicopters right above. As in 100 feet outside the bedroom window. This is a rare case when an outsider can actually even get to our WiFi signal; our house is so isolated that without a helicopter you have to be literally sitting inside or on the veranda to get signal. The fire was extinguished within minutes, nothing to worry about.)
 
This gives me quite a bit to think about. The problem I have is: does the browser now handle DNS traffic itself, or does it still defer to the system resolver? From everything that I have read, DoH is handled by the browser. That raises a number of red flags for me, because how do we know the browser can be trusted to use the DNS servers we have configured? I'm all for added privacy on the web, but DNS is supposed to be handled by the OS, not an application, certain specific tools exempted.

To add to some specific comments: a Chromebook does have an OS, iOS is an OS, and so on. The browser may be one of the only apps running on the machine, but the underlying software running the hardware is the operating system, whether that be Android, Linux, FreeBSD, iOS, etc. That is still required.

As for computer professionals, I am a computer professional and I use a traditional desktop running Windows for my general work. FreeBSD is used for servers and such. Linux for embedded. The right tool for the job. I've been doing web development these past few months, so I have Apache, PHP, and MySQL loaded on my Windows machine.

Back to the topic at hand, what should be happening is that the browser uses the local resolver as is, and the resolver uses DoH to connect to a DNS server... Or have a local DNS server use DoH to communicate with the outside world in a corporate environment.
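For what it's worth, a DoH lookup is literally just a DNS question carried in an HTTPS request, which is why it can live equally well in the browser, in some other application, or in the local resolver. A minimal sketch of what such a query looks like, using Cloudflare's public JSON endpoint purely as an illustration (any DoH provider would do, and the details here are illustrative rather than authoritative):

```python
# Minimal DoH lookup sketch using Cloudflare's public JSON endpoint.
# Note the chicken-and-egg detail: the hostname cloudflare-dns.com itself
# still has to be resolved the ordinary way before the HTTPS query can go out.
import json
import urllib.request

def doh_lookup(name, rrtype="A"):
    url = f"https://cloudflare-dns.com/dns-query?name={name}&type={rrtype}"
    req = urllib.request.Request(url, headers={"Accept": "application/dns-json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        answer = json.load(resp)
    # Each record in "Answer" carries the resolved value in its "data" field.
    return [rr["data"] for rr in answer.get("Answer", [])]

if __name__ == "__main__":
    print(doh_lookup("example.com"))
```

Whether that HTTPS request is made by the browser or by a local forwarding resolver is purely a deployment choice, which is exactly the point: nothing forces it into the browser.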
 
what should be happening is that the browser uses the local resolver as is, and the resolver uses DoH to connect to a DNS server... Or have a local DNS server use DoH to communicate with the outside world in a corporate environment.

While I agree with you completely, it defeats Google's desire to do even more tracking of your activities on the Internet. If Chrome does its own resolving and bypasses your OS settings, then they know when you go to sites that don't otherwise have any connection with Google (like analytics or ads). That is information that the big ISPs do track and sell; Google wants to take it for themselves.

The other side of the argument is that DNS has had the ability to do DNS over TLS for quite a while now, but very few DNS servers actually implement it. So, even if you (and I) change our default resolver to something not from our ISP, our ISP can still sniff the traffic as it goes through their network to track us. Google will say that they are helping protect our privacy by preventing the ISP from collecting that data and selling it.

And, of course, Google's solution is going to hurt corporations that use DNS to blackhole malicious domains. And it is also going to hurt those of us who do the same thing at home. While there are ways around DNS blackholes, they generally require the end user to intentionally work around DNS. That makes DNS blackholes good enough for a lot of situations. But if my browser is bypassing my DNS, then all of a sudden those known-malicious malware servers that sneak into ad networks are getting past my DNS servers, thanks to Google.
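To make the blackhole point concrete: a local resolver that blackholes a domain simply refuses to give a real answer, but a browser speaking DoH directly to a public resolver never asks the local resolver at all. A rough sketch of that difference; "bad.example" below is only a placeholder for a name your local DNS blackholes, and Google's public JSON DNS API is used just as an example of a DoH-style endpoint:

```python
# Contrast the OS resolver (which honors a local blackhole) with a direct
# query to a public resolver over HTTPS (which bypasses it entirely).
# "bad.example" is a placeholder; substitute a name your local DNS blackholes.
import json
import socket
import urllib.request

NAME = "bad.example"

def via_os_resolver(name):
    try:
        # Uses whatever resolver the OS is configured with (/etc/resolv.conf).
        return sorted({ai[4][0] for ai in socket.getaddrinfo(name, None)})
    except socket.gaierror:
        return []  # blackholed, or genuinely non-existent

def via_public_https_resolver(name):
    # Asks Google's public JSON DNS API directly, over port 443,
    # never touching the locally configured DNS servers.
    url = f"https://dns.google/resolve?name={name}&type=A"
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    return [rr["data"] for rr in data.get("Answer", [])]

print("OS resolver      :", via_os_resolver(NAME))
print("Direct over HTTPS:", via_public_https_resolver(NAME))
```

If the second call returns addresses while the first returns nothing, your blackhole is being bypassed; that is effectively what a DoH-enabled browser does for every lookup.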
 
Google will say that they provide a service by centralizing the blocking of dissident ideas ^h^h evil guys so everybody will be safer.

The computer is your friend. Trust the computer.
 
Granted, Google is evil. However, according to reports, with respect to DoH it is quite a lot less so than Mozilla. I don't use Chrome regularly, but what I read is that with Chrome it is very easy to opt out of DoH, even by way of a company-wide policy. That makes DoH only annoying, because somebody needs to pull the plug, but not exactly evil.

https://www.translatetheweb.com/?fr...-testweise-auf-DNS-over-HTTPS-um-4520039.html

With Firefox, on the other hand, it is hidden behind misleading settings by default, and Mozilla recklessly wants to push this through at all costs. I am a web developer as well, and I use Chrome and Firefox for testing purposes only. I have a test web server installed on localhost on the development machine, and my local DNS resolves the test virtual hosts to localhost. I found some obfuscated settings in FF which presumably disable DoH, and I disabled it. However, the first time Firefox no longer resolves my virtual-host sites to localhost, I won't search for other settings or do any troubleshooting with it; I will simply shoot the f**ing fox off my systems -- once and forever (full stop).
 
What I don't like is relinquishing control of DNS to the browser, basically giving Chrome or FF the ultimate say on what can and cannot be filtered.

To me DoH looks like a strategy to block content filters under a smokescreen of improving privacy. Given that Google makes a living off collecting personal data, I don't see how anyone could say privacy is a believable motivation. There's also the fact that Google is already removing the controls in Chrome that allow content filters to work as browser extensions; sorry, I don't recall the technical details on that. My feeling is it's an all-out blitz on Google's part to remove the users' ability to filter content on the web and further their profit potential.

I just hope controls remain to disable DoH. It's bad when corporations use their market share to force their policies down your throat and worse when there's no way around it.

As for DoT, it seems fine to me if it's supported by the traditional resolver and integrated into the DNS standard. That does seem like something that could be truly aimed at improving security and privacy.
 
What I don't like is relinquishing control of DNS to the browser, basically giving Chrome or FF the ultimate say on what can and cannot be filtered.

IDK about Chromium, but in FF you can choose which DoH provider you want to use, including your own.
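For the record, the relevant Firefox knobs are the network.trr.* preferences (network.trr.mode selects the behavior, network.trr.uri points at the DoH endpoint). A hedged sketch that writes them into a profile's user.js so they apply on the next start; the profile directory name and the endpoint URL below are placeholders you would adjust:

```python
# Hedged sketch: point Firefox's trusted recursive resolver (TRR) settings at
# your own DoH endpoint by appending prefs to the profile's user.js.
# The profile directory name and the DoH URL are placeholders.
from pathlib import Path

PROFILE = Path.home() / ".mozilla/firefox/xxxxxxxx.default"  # adjust to your profile
PREFS = [
    'user_pref("network.trr.mode", 3);  // 2 = DoH first, 3 = DoH only, 5 = DoH off',
    'user_pref("network.trr.uri", "https://doh.example/dns-query");  // your endpoint',
]

with open(PROFILE / "user.js", "a", encoding="utf-8") as fh:
    fh.write("\n".join(PREFS) + "\n")
```

The same prefs can be flipped by hand in about:config if you'd rather not script it.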
 
While I agree with you completely, it defeats Google's desire to do even more tracking of your activities on the Internet. If Chrome does its own resolving and bypasses your OS settings, then they know when you go to sites that don't otherwise have any connection with Google (like analytics or ads). That is information that the big ISPs do track and sell; Google wants to take it for themselves.

The other side of the argument is that DNS has had the ability to do DNS over TLS for quite a while now, but very few DNS servers actually implement it. So, even if you (and I) change our default resolver to something not from our ISP, our ISP can still sniff the traffic as it goes through their network to track us. Google will say that they are helping protect our privacy by preventing the ISP from collecting that data and selling it.

And, of course, Google's solution is going to hurt corporations that use DNS to blackhole malicious domains. And it is also going to hurt those of us who do the same thing at home. While there are ways around DNS blackholes, they generally require the end user to intentionally work around DNS. That makes DNS blackholes good enough for a lot of situations. But if my browser is bypassing my DNS, then all of a sudden those known-malicious malware servers that sneak into ad networks are getting past my DNS servers, thanks to Google.

My solution is to blacklist Google's DNS servers on my firewall, which cannot be bypassed. The IPFW rule to do this was posted earlier in the thread. So with that, Chrome has no choice but to use my configured DNS servers, which I happen to run myself.
 
You're assuming, though, that Google will always run DoH on well-known servers like 8.8.8.8. The problem with DoH is that it could in theory run on anything. Cloudflare, for example, could run DoH endpoints on every one of its front-end servers. If you then decide to block these, you block half the internet from loading.

At least DoT uses a well-known port (853) and gives you that choice. DoH doesn't.
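For contrast, here is roughly what a DoT query looks like: the ordinary DNS wire format wrapped in TLS on tcp/853, which is precisely what lets a firewall allow or deny it by port, where DoH just blends into ordinary 443 traffic. A sketch assuming the third-party dnspython package, with 1.1.1.1 used only as an example upstream:

```python
# Sketch of a DNS-over-TLS query: normal DNS wire format over TLS on tcp/853.
# Assumes the third-party "dnspython" (2.x) package; 1.1.1.1 is just an example.
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A")
# Port 853 is the IANA-assigned DoT port, so a firewall can recognize and
# police this traffic by port alone -- unlike DoH hiding inside port 443.
response = dns.query.tls(query, "1.1.1.1", port=853, timeout=5)

for rrset in response.answer:
    print(rrset)
```

That fixed port is what makes the firewall approach mentioned above workable for DoT, and unworkable for DoH endpoints scattered across ordinary web servers.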
 
There are so many instances I encounter with corporate software products where they try to force the user into doing things the way they want. In this case it's commandeering your DNS to take control of content. I really get tired of all the non-standard things I have to do with products to make them behave the way I want. DoH is just another one of those things on an ever-growing pile.

It's a trend I've noticed with software products, where they take more and more choice away from the user. It really extends into all consumer products. Quality and support seem to be on a steady decline as well. It's all about making products as cheap as possible with the lowest overhead.

I remember a time when they tried to establish TQM as a way of doing things in the corporate world; don't know if anyone remembers that. The objective was to maximize quality. That idea has been run out of town on a rail over the last couple of decades.
 
I remember a time when they tried to establish TQM as a way of doing things in the corporate world; don't know if anyone remembers that.
Absolutely, been there done that. Total Quality Management, Six Sigma, all that. It was horrible, and it was great. That might sound contradictory, but there is an explanation. The idea behind it came from the observation that quality (in particular of software artifacts) was getting horribly bad, much software was chock full of bugs or completely missed the requirements, and fixing and improving it was hard and expensive, sometimes so much that it was outright impossible. Many famous software projects of the 80s died a terrible death due to these problems.

And then people figured out the key observation: the real root cause of software quality problems is not a simple technical thing. You can't fix software quality with technology; new coding rules (like where you put the braces in C code, or how many spaces you indent), or new programming languages help a little bit, but they don't solve the problem. Giving people a more efficient programming language (like Cobol -> Pascal, or C -> Java -> Python) only makes them get to unmaintainable software that's over budget and behind schedule even faster. The real root cause of the engineering crisis is sociological, and it is corporate culture. That's what TQM and such set out to fix. In order to have better quality (deliver artifacts that actually work, on time and on budget), you need to first define what you really want (what is the software supposed to accomplish? meaning write a requirements document), you need to measure how well you are doing (are we behind schedule or ahead? what fraction of projects fail?), you need to change your behavior (let's see whether coding goes faster if we turn the phone system off), and you need a feedback system (the elephant project worked really well, let's use the same design method for hippo and rhino). This is what TQM taught us. Engineers hated it, because suddenly you had psychologists, sociologists and bean counters telling them what to do. But it worked.

And it didn't go away. Instead, it became part of the culture. The direct outcome of it was the CMM a.k.a. Capability Maturity Model, and all that still underlies the software development processes that we use today.
 
Absolutely, been there done that. Total Quality Management, Six Sigma, all that. It was horrible, and it was great. That might sound contradictory, but there is an explanation. The idea behind it came from the observation that quality (in particular of software artifacts) was getting horribly bad, much software was chock full of bugs or completely missed the requirements, and fixing and improving it was hard and expensive, sometimes so much that it was outright impossible. Many famous software projects of the 80s died a terrible death due to these problems.

And then people figured out the key observation: the real root cause of software quality problems is not a simple technical thing. You can't fix software quality with technology; new coding rules (like where you put the braces in C code, or how many spaces you indent), or new programming languages help a little bit, but they don't solve the problem. Giving people a more efficient programming language (like Cobol -> Pascal, or C -> Java -> Python) only makes them get to unmaintainable software that's over budget and behind schedule even faster. The real root cause of the engineering crisis is sociological, and it is corporate culture. That's what TQM and such set out to fix. In order to have better quality (deliver artifacts that actually work, on time and on budget), you need to first define what you really want (what is the software supposed to accomplish? meaning write a requirements document), you need to measure how well you are doing (are we behind schedule or ahead? what fraction of projects fail?), you need to change your behavior (let's see whether coding goes faster if we turn the phone system off), and you need a feedback system (the elephant project worked really well, let's use the same design method for hippo and rhino). This is what TQM taught us. Engineers hated it, because suddenly you had psychologists, sociologists and bean counters telling them what to do. But it worked.

And it didn't go away. Instead, it became part of the culture. The direct outcome of it was the CMM a.k.a. Capability Maturity Model, and all that still underlies the software development processes that we use today.

This sounds like the Ada "way" to me. :what:

 
And it didn't go away. Instead, it became part of the culture.

I was making more of a joke than being technically accurate, but TQM actually did extend over the whole of industry in the US. Somebody was really successful at promoting an idea.

At the time I experienced TQM, I was an avionics engineer working for a large corporation designing flight control systems for commercial aircraft. I didn't stay in that profession long enough to see what became of it there, but I did still see it in my next job at a large corporation which revolved around systems design. I don't believe TQM extended beyond the US; it might have, but I never saw it.

I'm sure Asian industry (which owns most of the industrial pie now) has never heard of TQM and will never subscribe to such a thing. It seems in foreign industry quality is barely part of the equation.
 
Boeing's previous CEO was in line to take Jack Welch's job at GE.
Take a look at Boeing now and think about what Six Sigma did to that company.
They went from engineers running the company to bean counters running the show.
Hence a total meltdown for a few pennies saved.
This is the current state of affairs at many companies.
Apparently at business school they don't teach what harm to a company's reputation actually costs.
 
Well, based on the general state of industry in the US (dismal), there was definitely something wrong, though I don't think TQM had much influence on its success or failure. There were other, more pertinent factors that killed industry in the USA (mainly the drive to cut labor costs).
 
What are your thoughts on the browser handling DNS?
Mozilla using Cloudflare. Google with Chrome.

It is a very very very bad idea.

For starters, indeed, it should be an OS function.

Then, it doesn't solve any problem we have - but it creates a ton of new ones.

Basically, the aim is not security but to move your DNS stream from your provider to Google and Cloudflare.
This is bad on many fronts, but let's start with the worst: on the internet, if you are not the paying customer, you are the product.

Remember that well. Many of you dislike your ISP (full disclosure - I work for one), but it is a company that operates in the same country you do, and works by the same laws. Ideally, you have some political input into the legal framework it works under. And you are a paying customer; you have a contract.

None of that holds true for Google or Cloudflare. You have no legal relation to them whatsoever, to most of the world's population they are foreign thugs, and they are completely unregulated. And they don't get a penny from you - making you just a filet piece in their offering to the actual paying customers. Another eternal truth is that There Ain't No Such Thing As A Free Lunch. These companies need to make money - so they HAVE to sell you out, to someone, somehow.

My second objection is environmental. Basically, introducing cryptography where none is needed burns energy. And since this is large-scale, we can expect large-scale CPU power to be needed to implement that shit. Even if we ignore the client side, on the server side a very basic back-of-the-napkin calculation says we currently need about 2 mW per user to provide DNS service (based on real-world data, probably a little low). With DoH I expect that to increase by a factor of four to five. Extrapolated to 4 billion internet users, this gets us something like 30 MW of additional electrical power needed. That is the power requirement of a small town. This is not quite Satoshi-Nakamoto-sized bad yet, but Google's brain fart still visibly increases world power usage for no gain, in a time when we desperately need to reduce it, not to mention the thousands of additional servers that have to be built, transported, installed, de-installed, transported and scrapped every five to seven years for this alone.
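Spelling out that back-of-the-napkin arithmetic (every input is an assumption stated above, not a measurement):

```python
# Back-of-the-napkin check of the figures above; every input is an assumption
# taken from the post, not measured data.
users = 4e9                      # rough number of internet users
baseline_w_per_user = 2e-3       # ~2 mW per user for plain DNS service
factor_low, factor_high = 4, 5   # assumed multiplier once DoH is added

baseline_mw = users * baseline_w_per_user / 1e6       # megawatts today: ~8 MW
extra_low = baseline_mw * (factor_low - 1)            # additional MW at 4x
extra_high = baseline_mw * (factor_high - 1)          # additional MW at 5x

print(f"baseline DNS power : {baseline_mw:.0f} MW")
print(f"additional with DoH: {extra_low:.0f}-{extra_high:.0f} MW")  # ~24-32 MW
```

Which lands in the ballpark of the 30 MW figure above.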

My third objection may seem strange to you. Right now, in many countries DNS provides the angle to implement Internet censorship on behalf of those institutions that are powerful enough to force it. It is comparatively cheap and the obstacle it establishes is high enough to make it acceptable to those requiring it but low enough that most of us can live with it. It is foolish to assume that you can simply out-power those institutions. Most of us in the first world live one court decision away from something like the electronic Chinese wall implementing their local censorship regime. The widespread adoption of DoH might trigger this exact decision.

My fourth objection is quality. Google and Cloudflare sit somewhere far away, while we, the ISPs, sit directly where you connect. We can, and often do, implement better and lower-latency DNS service than Google and Cloudflare possibly could. Additionally, the cryptography inherent in DoH will probably cause additional, quite visible latency penalties. More DNS latency makes the Internet feel "slower" to you. The cryptography will also increase complexity and, thus, operational risk (leading to lower service availability) and the attack surface for intruders, making DNS services more vulnerable and thus, again, less available and less trustworthy.
In sum, DNS will be slower, it will be less trustworthy because of that, and it will fail more often.

And my fifth objection is auto-configuration and discovery. Google messes with a vast ecosystem of existing auto-configuration and discovery mechanisms. They barely have any answers to questions arising from that yet.

There probably are a few more, but that should suffice as an intro :)
 
Directly to your fourth point, I wanted to share this read from HN:
We Americans are lucky to have speedy Cloudflare response times.
Not all of the world is so lucky.
 