The Usenet is a huge worldwide collection of discussion groups. Each discussion group has a name, e.g. alt.checkmate, and a collection of messages. These messages, usually called articles, are posted by readers like you and me who have access to Usenet servers, and are then stored on the hundreds of Usenet servers around the world.
This ability to both read and write to a Usenet newsgroup makes the Usenet very different from the bulk of what people today call ``the Internet.'' The Internet has become a colloquial term for the World Wide Web, and the Web is (largely) read-only. There are online discussion groups with Web interfaces (like Reddit), and there are mailing lists, but Usenet is probably more convenient than either for most large discussion communities. This is because articles are replicated to your local Usenet server, allowing you to read and post articles without accessing the global Internet, something of great value to those with slow Internet links. Usenet articles also conserve bandwidth and storage because, unlike messages on an email-based mailing list, they do not land in each member's mailbox. Twenty members of a mailing list in one office will have twenty copies of each message delivered to their mailboxes; with a Usenet discussion group and a local Usenet server, there is just one copy of each article, and it fills up no one's mailbox.
Another nice feature of having your own local Usenet server is that articles stay on the server even after you've read them. You can't accidentally delete a Usenet article the way you can delete a message from your mailbox. A Usenet server is thus an excellent way to archive the articles of a group discussion without placing the onus of archiving on any one member. This makes local Usenet servers very valuable as archives of internal discussion within corporate Intranets, provided the article expiry configuration of the Usenet server software has been set for sufficiently long expiry periods.
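To give a flavour of what that configuration looks like: if your server software is INN (a popular choice, though by no means the only one), expiry periods are set in its expire.ctl file. A sketch, with made-up retention periods and a hypothetical internal hierarchy called acme.*; the fields after the group pattern are a moderation flag and the minimum, default, and maximum number of days to keep an article:

    /remember/:11
    *:A:30:60:90
    acme.*:A:365:730:never

Consult the expire.ctl(5) manual page of your own INN installation before copying anything from here.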
Usenet is also more resistant to censorship. Centralized websites like Reddit can ban entire communities, as happened in 2020 with /r/The_Donald; that community ended up having to create its own web forum, thedonald.win. Events like this show the strength of decentralized networks like Usenet in the face of increased political censorship: if one Usenet server decides to censor a group, that does not affect the other servers, and users can simply move to a server that still carries it.
Other decentralized networks in the same spirit as Usenet include Diaspora (a decentralized Facebook), Mastodon (a decentralized Twitter), and BitTorrent (decentralized file distribution).
Usenet news works by the reader first opening up a Usenet news program, which in today's GUI world will most likely be something like Windows Live Mail, Forte Agent, or Unison (on the Mac). There are also many proven, well-designed character-based Usenet news readers, and we will have a section on our site dedicated to this type of software. The reader then selects a Usenet newsgroup from the hundreds or thousands hosted by her local server, and accesses all unread articles. These articles are displayed to her, and she can then decide to respond to some of them.
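Under the hood, the newsreader speaks NNTP (RFC 3977) to the server, usually on port 119. The following Python sketch shows the reading side of the conversation; the host name and newsgroup are placeholders, and a real reader would also handle dot-stuffed lines and error codes:

    import socket

    HOST = "news.example.com"   # placeholder: your local Usenet server

    def read_line(f):
        """Read one CRLF-terminated response line from the server."""
        return f.readline().decode("utf-8", "replace").rstrip("\r\n")

    with socket.create_connection((HOST, 119)) as sock:
        f = sock.makefile("rb")
        print(read_line(f))                  # greeting, e.g. "200 ... ready"

        sock.sendall(b"GROUP misc.test\r\n") # select a newsgroup
        print(read_line(f))                  # "211 <count> <first> <last> misc.test"

        sock.sendall(b"ARTICLE\r\n")         # fetch the currently selected article
        if read_line(f).startswith("220"):   # 220 = article follows
            while (line := read_line(f)) != ".":  # a lone dot ends the article
                print(line)

        sock.sendall(b"QUIT\r\n")

Each command gets a numeric status line back, and multi-line responses such as article text are terminated by a line containing a single dot.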
When the reader writes an article, either in response to an existing one or to start a brand-new thread of discussion, her software posts this article to the Usenet server. The article carries a header listing the newsgroups into which it is to be posted. Once it is accepted by the server, it becomes available for other users to read and respond to. The article is automatically expired, i.e. deleted, by the server from its internal archives based on expiry policies set in its software; the author of the article usually can do little or nothing to control the expiry of her articles. The amount of time a server retains an article is called retention.
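As an illustration, a freshly posted article might begin like this (all the names and the message-ID here are invented); the Newsgroups: header is what tells the server where to file the article, and listing several groups cross-posts it to all of them:

    From: asha@example.com (Asha)
    Newsgroups: misc.test,acme.general
    Subject: Testing our new server
    Message-ID: <unique-id-1@news.example.com>

    This is the body of the article.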
A Usenet server rarely works on its own. It forms part of a collection of servers which automatically exchange articles with each other. The flow of articles from one server to another is called a newsfeed. In a simple case, one can imagine a worldwide network of servers, all configured to replicate articles with each other, busily passing copies along the network as soon as one of them receives a new article posted by a human reader. This replication is done by powerful and fault-tolerant processes, and gives the Usenet network its power. Your local Usenet server literally has a copy of all current articles in all relevant newsgroups.
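Server-to-server exchange also runs over NNTP: a feeding server offers each article to its peer by message-ID, and the peer takes it only if it has not already seen it, which is what keeps a flood-fill network from passing the same article around forever. A rough Python sketch of one such offer, with placeholder host and message-ID (reusing the toy article from above):

    import socket

    PEER = "peer.example.net"                 # placeholder peer server
    MSG_ID = "<unique-id-1@news.example.com>"

    # A tiny article to offer; in real life this comes from the spool.
    ARTICLE = (
        "Message-ID: " + MSG_ID + "\r\n"
        "From: asha@example.com (Asha)\r\n"
        "Newsgroups: misc.test\r\n"
        "Subject: Testing our new server\r\n"
        "\r\n"
        "This is the body of the article.\r\n"
        ".\r\n"                               # a lone dot ends the transfer
    ).encode()

    with socket.create_connection((PEER, 119)) as sock:
        f = sock.makefile("rb")
        f.readline()                                  # greeting
        sock.sendall(f"IHAVE {MSG_ID}\r\n".encode())  # offer by message-ID
        reply = f.readline().decode()
        if reply.startswith("335"):                   # 335 = send the article
            sock.sendall(ARTICLE)
            print(f.readline().decode().strip())      # 235 = accepted
        else:
            print("peer declined:", reply.strip())    # 435 = already has it
        sock.sendall(b"QUIT\r\n")

A reply of 335 means ``send the article''; 435 means the peer already has it, and the feeder simply moves on to the next one.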
To handle a full Usenet feed (text plus binaries), you will need specialised networks, very high-end servers, and (especially) large disk arrays (RAIDs). Such setups are called ``carrier-class'' Usenet servers, and will be discussed a bit later in this HOWTO. Administering such an array of hardware may not be the job of the new Usenet administrator. However, as of 2020, anyone can set up a text-only Usenet server with a 1 TB hard drive, a small server, and some bandwidth.
Nevertheless, it may be interesting to understand what volumes we are talking about. Usenet news article volumes have been doubling every fourteen months or so, going by what we hear from carrier-class Usenet administrators. At the beginning of 1997, the volume was 1.2 GB of articles a day. In 2020, it is about 10-20 TB of data a day: the volume of Usenet feeds has multiplied roughly 10,000 times! Add the fact that you must pass outgoing articles along to your peers (both to maintain your peering relationships and to climb the Top1000 rankings) and serve articles to your own customers and users, and you may be pushing and receiving up to 100 TB of data a day. Buying that capacity from an IP transit provider like Zayo (or Telia in Europe) would be extremely expensive, so most full-feed Usenet peering is done at Internet exchanges such as AMS-IX and DE-CIX (Caputo has the SIX). Handling a full feed is only possible with a lot of startup capital to fund all of these things. It is not impossible to start a complete Usenet server, though: Cedric from Newsoo was able to create his own software and NNTP server, which was ahead of the competition before it was shut down.
Meanwhile, a text-only feed runs to just 50-500 MB a day as of 2020. With the rising interest in Usenet driven by censorship elsewhere, this may change soon.
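Those figures make the sizing arithmetic for a text-only server pleasantly simple. A back-of-the-envelope calculation, using the upper end of the range above and the 1 TB disk mentioned earlier (substitute your own numbers):

    # Rough retention estimate for a text-only server.
    feed_mb_per_day = 500          # upper end of a 2020 text-only feed
    disk_tb = 1.0                  # the 1 TB disk mentioned above

    days = disk_tb * 1_000_000 / feed_mb_per_day
    print(f"{days:,.0f} days, about {days / 365:.1f} years of retention")
    # -> 2,000 days, about 5.5 years of retention

In other words, even a modest machine can hold several years of a full text feed before expiry becomes a concern.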
Then there's the internal Usenet service. By internal here, we mean a private set of Usenet newsgroups, not a private computer network. Every company or university which runs a Usenet news service creates its own hierarchy of internal newsgroups, whose articles never leave the campus or office, and which therefore do not consume Internet bandwidth. Within your organisation, these newsgroups are often the most heavily used, and will carry more internally generated traffic than all the ``public'' newsgroups you may subscribe to. After all, how often does a guy have something to say which is relevant to the world at large, unless he's discussing a globally relevant topic like ``Unix rules!''? If such internal newsgroups are the focus of your Usenet servers, then you may find that fairly modest hardware and Internet bandwidth will suffice, depending on the size of your organisation.
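An internal hierarchy is nothing more than a naming convention, with the groups created only on your own server and never fed to your peers. On an INN server, for instance, the active file might contain entries like these for a hypothetical organisation called acme, where the fields are the group name, the highest and lowest article numbers, and a flag (``y'' for open posting, ``m'' for moderated):

    acme.general    0000004321 0000000001 y
    acme.tech.unix  0000000567 0000000001 y
    acme.announce   0000000042 0000000001 m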
The new Usenet server administrator has to undertake a sizing exercise to ensure that he does not bite off more than he, or his network resources, can chew. We hope we have provided sufficient information for him to get started with the right questions.