Prime numbers database

From 2 to 2^32.

Wireless electricity by resonant magnetic coupling

Work abandoned -- Last modification: Sep 22, 2009

This page aims to gather all relevant information available on the Web in order to explain how wireless power transfer using magnetic resonance can be achieved easily.


Nikola Tesla's work, more than a century ago, proved that wireless transmission of electricity was possible. He built a huge emitting tower, but it was destroyed before he could realize his plan of providing free electricity anywhere in the world (which was not good for profit).

In 2006, MIT researchers and Intel worked again on this topic and built a few mid-range prototypes. Intel's is called WREL, for Wireless Resonant Energy Link (video). The MIT research is led by Marin Soljacic and was called WiTricity. His webpage contains research papers, for both theory and experiments. The MIT project has since created WiTricity Corp., and wireless power products are announced for 2011.

Information on how to build a wireless power transmission system, and on how to calculate the values and properties of its electronic components, is very hard to find on the Web in a form free of complicated equations (I actually found none). In this article, we will study the theory paper in order to design wireless transmission systems based on resonant magnetic coupling more easily.

Three kinds of wireless power transmission can be identified: short-range, based on induction coupling; mid-range, based on resonant magnetic coupling, which is an enhanced induction coupling; and long-range, based on electromagnetic waves such as microwaves. To understand the difference between these three kinds, you can read this introduction to wireless power and this one for details on induction wireless power.

Resonant magnetic coupling

By using magnetic resonance between coils, the transmission efficiency is greatly improved over simple induction coupling. Higher frequencies are used, and any object in the field that resonates at the same frequency can be powered wirelessly. It does not require a direct line of sight between the two coupled devices, and they don't have to be aligned in parallel.

In August 2009, 95% efficient transfer over 40 cm and 90% over 60 cm were demonstrated.

Designing a RMC system

1. The oscillator

The MIT project used a Colpitts oscillator.

2. The resonant coils

How do you build identical coils and determine their resonant frequency, or build coils for a specific resonant frequency when you already have the oscillator circuit? The resonant frequency is given by f = 1/(2.pi.sqrt(LC)) (see also other formulas).
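As a worked example (the component values are chosen arbitrarily for illustration), a 10 µH coil paired with a 100 pF capacitor resonates at:

```latex
f_0 = \frac{1}{2\pi\sqrt{LC}}
    = \frac{1}{2\pi\sqrt{10^{-5}\,\mathrm{H} \times 10^{-10}\,\mathrm{F}}}
    \approx 5.03\,\mathrm{MHz}
```

Note that the product LC fully determines the frequency: halving L and doubling C keeps the same resonance.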

Open questions

Is it safe regarding magnetic storage devices, like hard disk drives?

External resources

Commented article on WiTricity, with good information on the design of Soljacic's device.

Wapedia articles on resonant energy transfer and quantum tunnelling.

MIT lecture videos, very interesting and easy to understand. Here is the lecture on LRC circuits.


Using SASL authentication with postfix SMTPD

Configuring postfix for SASL authentication can be quite tricky, especially with virtual alias domains. Fortunately, the documentation is quite complete and help can be found in a few other places.

This article reviews the configuration of the sasldb plugin of libsasl, integrated in postfix built with Cyrus, as found in a modern OS such as Debian 8, in a virtual domain set-up. sasldb is the simplest authentication method for postfix: it stores usernames and passwords in a database file and requires no external authentication service. Although simple and documented, this approach remains quite unclear on some points, or even wrongly and contradictorily documented, and prone to error. The following sections tackle all the issues that were met and solved in this Serverfault question.

The use of saslauthd

There is no need to install or start saslauthd in order to use the simple sasldb authentication database. Support for this authentication method is built into libsasl. A simple mention of the method in the smtpd.conf file is enough.
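As a sketch, that mention could look like the following smtpd.conf (the mech_list line is an example; restrict it to the mechanisms you actually want to offer):

```
pwcheck_method: auxprop
auxprop_plugin: sasldb
mech_list: PLAIN LOGIN
```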

The smtpd.conf file

What the hell is this file, and where should it be put? The first answer is found in the README: the file is the data exchange format between postfix smtpd and libsasl. I found at least three candidate locations for it, with no clear indication of which one is right. Serverfault was helpful: there is an extra postfix configuration variable, which never appears in tutorials, that defines the path for the file: cyrus_sasl_config_path.
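For example, assuming you keep the file in /etc/postfix/sasl/ (the directory is an arbitrary choice), main.cf would point postfix at it with:

```
cyrus_sasl_config_path = /etc/postfix/sasl
```

The variable names a directory to search, and the file inside it must be called smtpd.conf.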

Creating the sasldb

The sasldb file is created with the saslpasswd2 command. The realm argument is particularly unclear, especially the way postfix specifies it in the config file with smtpd_sasl_local_domain. The content of that variable is the default domain users will authenticate against. For example, if it is set to "w.tf", a user trying to authenticate with the username john will actually be checked as john@w.tf in the sasldb. If you only manage one domain, using this option makes login easier for users. Otherwise, leaving the variable empty requires users to provide the exact sasldb username, in our example "john@w.tf".

The -u argument of saslpasswd2 is optional. The commands saslpasswd2 -u w.tf john and saslpasswd2 john@w.tf are equivalent. In both cases, programs will have to provide the full username when authenticating; in our case, postfix will supply the domain part itself only if smtpd_sasl_local_domain is configured.
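To make the equivalence concrete (w.tf and john being the example names from above, and -c creating the entry), the two forms and a way to check what actually got stored are:

```
saslpasswd2 -c -u w.tf john
saslpasswd2 -c john@w.tf
sasldblistusers2    # lists the stored usernames together with their realm
```

Checking the output of sasldblistusers2 is the quickest way to see the exact username string that authentication will be matched against.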

Authentication failures

Several causes of authentication failures were met on the first configuration try. First, the chroot, as mentioned in some forum threads. Postfix smtpd runs in a chroot by default. Consequently, the sasldb file must be placed inside the chroot, or smtpd has to be run without the chroot. Disabling the chroot is easily done in the master.cf configuration file. I would recommend keeping the chroot and moving the sasldb inside it anyway, for security reasons. For convenience, a symlink can be created at /etc/sasldb2 so that the basic SASL commands keep working with the default chroot path: ln -sf /var/spool/postfix/etc/sasldb2 /etc/.

The most troubling issue for me was when the system was actually working but my test was not. The official documentation presents various ways of testing the AUTH PLAIN command, using Perl or shell base64 string construction. These methods didn't work for me; only the following command did: gen-auth plain. On authentication failure with the other methods, these warnings appeared in the mail error log:

warning: SASL authentication failure: Can only find author/en (no password)
warning: localhost[]: SASL PLAIN authentication failed: bad protocol / cancel
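For reference, the AUTH PLAIN token is just the base64 encoding of an empty authorization identity, the username and the password, separated by NUL bytes. A minimal sketch in Python (username and password here are placeholders) that builds a token you can paste after AUTH PLAIN in an SMTP session:

```python
import base64

def auth_plain_token(username: str, password: str) -> str:
    """Build the SASL PLAIN initial response: \\0authcid\\0password, base64-encoded.

    The leading NUL is the empty authorization identity (authzid).
    """
    raw = b"\x00" + username.encode("utf-8") + b"\x00" + password.encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

token = auth_plain_token("john@w.tf", "secret")
print(token)  # paste this string after "AUTH PLAIN " in the SMTP dialogue
```

A malformed separator (for example a literal backslash-zero instead of a real NUL byte) produces exactly the "bad protocol / cancel" style of failure shown above, which is why hand-built shell strings are fragile.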

myorigin, mydestination and friends

Configuring myorigin, myhostname, mydestination and mydomain is a bit tricky too. Following the Serverfault replies, I ran some tests, and I think that using one of the machine's actual hostnames as myorigin and myhostname is the correct way to proceed. This may have two drawbacks, however: outbound emails will appear in headers as sent by user@the.real.hostname.of.the.machine instead of user@the.mail.domain, and this real hostname must not be in the list of managed virtual domains.

The problem with the latter is that myorigin is automatically appended to local users that have no domain in the virtual alias maps, so you would not be able to deliver to local users with UNIX accounts unless the content of myorigin is also listed in mydestination, and you can't have a virtual domain in mydestination. Also, if you don't have a real hostname outside the list of virtual domains and just use localhost instead, some remote servers may reject your emails. It can also be a concern to have the most neutral hostname possible: when you manage at least two domains, you don't really want emails from domain1 to appear as sent by a machine of domain2. So what I did was use the server's default reverse DNS name, provided by the ISP/hosting service, which may not always be possible either.
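As a hedged sketch of the resulting layout (all domain names here are placeholders), the relevant main.cf fragment could look like:

```
myhostname = srv1.isp-reverse-dns.example
myorigin = $myhostname
mydestination = $myhostname, localhost
virtual_alias_domains = domain1.example, domain2.example
```

The key constraint from the paragraph above is visible here: the name used for myhostname/myorigin appears in mydestination and is absent from virtual_alias_domains.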


Mirror Wikipedia on your own computer

Work in progress - Last modification: Nov 28, 2019

With the end of the world coming up, it'll be handy to have a local mirror of Wikipedia. The whole database is a bit too big to manage, but keeping only the current version of the pages makes it manageable. For example, in October 2019, the English Wikipedia pages (text only, no media) are dumped in 70 GB of XML, for about 6 million articles. Using the pages-articles dump, which features all articles but no history or talk pages, there are in fact 19 million pages to import (including templates, redirects, media descriptions...).

Some software can use these XML dumps and present them in a tailored browser; see the offline Wikipedia readers section. These are certainly easier to install, and some even come with the pages' media, but it's not as fun as having a real editable wiki. It's also not easy to find software that works on an ARM processor, which would be nice in order to run this on the low-power Raspberry Pi 4 computer. It seems kiwix can create a wifi hotspot that serves a static version of Wikipedia: see the doc.

It's not really easy to mirror Wikipedia: there's not much recent documentation on the subject, and making a website that looks like Wikipedia requires using the same version of MediaWiki and all its extensions (more than 100). The size of the data makes the import hard to complete, and it's also complicated to get the media (images and films in pages). Here's a recent update on what works and what doesn't.

  1. Download the XML dumps here: https://dumps.wikimedia.org/backup-index.html.
  2. Install mediawiki from git: https://www.mediawiki.org/wiki/Download_from_Git#Fetch_external_libraries.
  3. Import the XML dumps into your database. The documentation about this (https://www.mediawiki.org/wiki/Manual:Importing_XML_dumps) is quite old, and the only method that seems to work in 2019 is the one that is not recommended for importing large amounts of data: the maintenance/importDump.php script.
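Step 3 then boils down to commands like the following (paths are placeholders; importDump.php accepts a possibly compressed dump file, and the MediaWiki manual recommends rebuilding the recent-changes table afterwards):

```
cd /var/www/mediawiki
php maintenance/importDump.php --conf LocalSettings.php /path/to/enwiki-pages-articles.xml.bz2
php maintenance/rebuildrecentchanges.php
```

Expect the import of 19 million pages to run for a very long time with this script; that is precisely why the manual discourages it for large dumps.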
