How do you view a webpage without a browser in a console?


If I recall correctly you're fond of Perl; www/p5-libwww is useful. Besides adding some useful Perl modules, it also comes with a couple of command-line utilities, GET(1) and HEAD(1) for example.
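A quick sketch of how those two utilities are typically used (the URL is just a placeholder):

Code:
# dump a page's markup to stdout with GET(1)
GET https://www.freebsd.org/ | less

# show only the response headers with HEAD(1)
HEAD https://www.freebsd.org/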
 
Zirias No. He asks "How do you view a webpage..." Looking at or downloading the markup using curl or telnet is not viewing a web page. I take that to mean he does NOT just want to look at the source markup.
I think I should have said "read" or "fetch" instead of "view".

If Perl is an option, the LWP::Simple module from libwww-perl-6.58 works great.
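For instance, a minimal one-liner (the URL is only a placeholder):

Code:
# print a page's body to stdout via LWP::Simple's getprint()
perl -MLWP::Simple -e 'getprint("https://www.freebsd.org/")'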
 
No.

Please look up the definition of a "representation". In a nutshell, a representation is a format in which the data is presented or transported. There are human-readable ones and non human-readable ones. HTML clearly belongs to the first category (although you can obfuscate it like crazy...)
 
Zirias Please look up the definition of "view", which is what he asked for. However, now he says he meant "fetch" and not "view".

One views a web page through a browser or software that interprets the supplied markup. Few have any reason to look at that markup by downloading it.
 
"View" does not imply a specific representation.

What was meant here was easy to deduce from the given options.
 
I've made a few 'web scrapers' for work. Needed to download some specific software, and it wasn't available in a 'regular' repository, so I had to scan the web pages for a specific link to a downloadable file. As long as nothing major changes on that particular page the downloader does what it's supposed to do. Used a fairly basic shell script for that: wget(1) the page, parse it somewhat with grep, then fire off another wget(1) to download the latest version of that software.

Now I've used wget(1) in that case because that's what was available to me. On FreeBSD I would probably just use fetch(1) for this.
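A minimal sketch of that kind of scraper, assuming the page uses relative links; the URL and the link pattern are made-up placeholders, not the real ones:

Code:
#!/bin/sh
# Hypothetical download page; the real one is site-specific.
PAGE_URL="https://example.com/downloads/"

# Grab the markup, pull out the first link that looks like a tarball.
LINK=$(wget -q -O - "$PAGE_URL" \
    | grep -o 'href="[^"]*\.tar\.gz"' \
    | head -n 1 \
    | sed 's/^href="//; s/"$//')

# Fire off a second wget(1) to download the file itself.
wget "$PAGE_URL$LINK"

On FreeBSD, replacing the first call with fetch -q -o - "$PAGE_URL" would avoid the wget dependency.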
 
To get the content with JavaScript inside a page, you could use node.
To inspect the content, you can use a test library like mocha. To export what you find into a readable format, you can use Babel and Istanbul (nyc, you see the logic?).

After that you have more libs, dependencies and unknown code than the crappiest browser, and you still have no clear view of the webpage...
 
When I as a European look at U.S. pages I first need to agree to the applicable law before I can even see the first page. This is interactive...
Not all internet pages are as simple as freshports.
Same for me. And I do not know why... I mean, I can understand the situation where viewer discretion is advised and there is a question "Are you 18?" However, what about situations when I just want to watch another season of Chernobyl on HBO and no, I am not an old Soviet spy :D

God damn it, I think sometimes we have a lot of things to agree to.

P.S. I am from AU
 
I too have needed to make "web scrapers" for work and used a combination of wget(1), fetch(1), lynx and w3m. From memory (it was a few years ago; I've since retired), wget was preferred over fetch when I needed to "save state" so as to be able to retrieve images from some pages.
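If "save state" meant carrying the site's cookies across requests, a sketch of how wget can do that (the URL is a placeholder):

Code:
# First request: save any cookies the site sets, session cookies included.
wget --save-cookies cookies.txt --keep-session-cookies -O page.html https://example.com/gallery

# Second request: reuse the saved cookies and also fetch page
# requisites such as images (-p).
wget --load-cookies cookies.txt -p https://example.com/gallery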
 
Having to agree to the applicable law first never happened to me. I was able to view espn.com (a Las Vegas-based site, BTW, even with offices in Connecticut (East Coast US)) just fine. Well, that info is from 2005, which is when I was in the EU last time. REALLY need to go back at some point, but there are a LOT of ducks to get in a row for that to happen.
 