Pure-Python package to get the MAC address of network interfaces and hosts on the local network.
It provides a platform-independent interface to get the MAC addresses of:
System network interfaces (by interface name)
Remote hosts on the local network (by IPv4/IPv6 address or hostname)
It provides one function: get_mac_address()
Should you use this package?
If you only need the addresses of network interfaces, have a limited set of platforms to support, and are able to handle C-extension modules, then you should instead check out the excellent netifaces package by Alastair Houghton (@al45tair). It’s significantly faster (thanks to it being C-code) and has been around a long time and seen widespread usage. However, unfortunately it is no longer maintained as of May 2021, so it may not be a great choice for new projects. Another great option that fits these requirements is the well-known and battle-hardened psutil package by Giampaolo Rodola.
If the only system you need to run on is Linux, you can run as root, and C-extension modules are fine, then you should instead check out the arpreq package by Sebastian Schrader. In some cases it can be significantly faster.
If you want to use psutil, scapy, or netifaces, I have examples of how to do so in a GitHub Gist.
from getmac import get_mac_address

eth_mac = get_mac_address(interface="eth0")
win_mac = get_mac_address(interface="Ethernet 3")
ip_mac = get_mac_address(ip="192.168.0.1")
ip6_mac = get_mac_address(ip6="::1")
host_mac = get_mac_address(hostname="localhost")
updated_mac = get_mac_address(ip="10.0.0.1", network_request=True)

# Enable debugging
from getmac import getmac
getmac.DEBUG = 2  # DEBUG level 2
print(getmac.get_mac_address(interface="Ethernet 3"))

# Change the UDP port used for updating the ARP table (UDP packet)
from getmac import getmac
getmac.PORT = 44444  # Default is 55555
print(getmac.get_mac_address(ip="192.168.0.1", network_request=True))
Terminal examples
Python 2 users: use getmac2 or python -m getmac instead of getmac.
getmac --help
getmac --version
# Invoking with no arguments will return MAC of the default interface
getmac
# Usage as a module
python3 -m getmac
# Interface names, IPv4/IPv6 addresses, or hostnames can be specified
getmac --interface ens33
getmac --ip 192.168.0.1
getmac --ip6 ::1
getmac --hostname home.router
# Running as a Python module with shorthands for the arguments
python -m getmac -i 'Ethernet 4'
python -m getmac -4 192.168.0.1
python -m getmac -6 ::1
python -m getmac -n home.router
# Getting the MAC address of a remote host requires the ARP table to be populated.
# By default, getmac will populate the table by sending a UDP packet to a high port on the host (defaults to 55555).
# This can be disabled with --no-network-request, as shown here:
getmac --no-network-request --ip 192.168.0.1
python -m getmac --no-network-request -n home.router
# Enable output messages
getmac --verbose
# Debug levels can be specified with '-d'
getmac -v --debug
python -m getmac -v -d -i enp11s4
python -m getmac -v -dd -n home.router
# Change the UDP port used for populating the ARP table when getting the MAC of a remote host
getmac --ip 192.168.0.1 --override-port 9001
# The platform detected by getmac can be overridden via '--override-platform'.
# This is useful when debugging issues or if you know a method
# for a different platform works on the current platform.
# Any values returned by platform.system() are valid.
getmac -i eth0 --override-platform linux
getmac --ip 192.168.0.1 --override-platform windows
# Force a specific method to be used, regardless of the consequences or if it even works
getmac -v -dddd --ip 192.168.0.1 --force-method ctypeshost
Function: get_mac_address()
interface: Name of a network interface on the system
ip: IPv4 address of a remote host
ip6: IPv6 address of a remote host
hostname: Hostname of a remote host
network_request: Whether a network request should be made to update and populate the ARP/NDP table of remote hosts, which is used to look up MACs in most circumstances. Disable this if you want to just use what’s already in the table, or if you have requirements to prevent network traffic. The network request is an empty UDP packet sent to a high port, 55555 by default. This can be changed by setting getmac.PORT to the desired integer value. Additionally, on Windows, this will send a UDP packet to 1.1.1.1:53 to attempt to determine the default interface (Note: the IP is Cloudflare’s DNS server).
Configuration
logging.getLogger("getmac"): Runtime messages and errors are recorded to the getmac logger using Python’s logging module. They can be configured by using logging.basicConfig() or adding handlers to the "getmac" logger.
getmac.getmac.DEBUG: integer value that controls debugging output. The higher the value, the more output you get.
getmac.getmac.PORT: UDP port used to populate the ARP/NDP table (see the documentation of the network_request argument in get_mac_address() for details)
getmac.getmac.OVERRIDE_PLATFORM: Override the platform detection with the given value (e.g. "linux", "windows", "freebsd", etc.). Any values returned by platform.system() are valid.
getmac.getmac.FORCE_METHOD: Force a specific method to be used, e.g. 'IpNeighborShow'. This will be used regardless of its method type or platform compatibility, and Method.test() will NOT be checked! The list of available methods is in getmac.getmac.METHODS. A combined example of these configuration options follows below.
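A minimal combined sketch of these options (the values shown here are only illustrative):

import logging

from getmac import getmac

# Send getmac's runtime messages through the standard logging machinery
logging.basicConfig(level=logging.DEBUG)

getmac.DEBUG = 1                        # more verbose internal output
getmac.PORT = 44444                     # UDP port for the ARP/NDP table request (default is 55555)
getmac.OVERRIDE_PLATFORM = "linux"      # treat the current platform as Linux
getmac.FORCE_METHOD = "IpNeighborShow"  # force one specific method; Method.test() is skipped

print(getmac.get_mac_address(ip="192.168.0.1", network_request=True))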
Features
Pure-Python (no compiled C-extensions required!)
Python 2.7 and 3.4+
Lightweight, with no dependencies and a small package size
Can be dropped into a project as a standalone .py file
Supports most interpreters: CPython, pypy, pypy3, IronPython 2.7, and Jython 2.7
Provides a simple command line tool (when installed as a package)
MIT licensed!
Legacy Python versions
If you are running an old Python version or interpreter (2.6/3.3 and older), then you can install an older version of getmac that supported it. The wheels are available in the GitHub releases, or from PyPI with a current version of pip and some special arguments.
Python 2.5: get-mac 0.5.0
Python 2.6: getmac 0.6.0
Python 3.2: get-mac 0.3.0
Python 3.3: get-mac 0.3.0
NOTE: these versions do not have many of the performance improvements, platform support, and bug fixes that came with later releases. They generally work, just not as well. However, if you’re using such an old Python, you probably don’t care about all that 🙂
Notes
Python 3.10 and 3.11 should work, but are not automatically tested at the moment due to having to support 2.7
If none of the arguments are selected, the default network interface for the system will be used. If the default network interface cannot be determined, then it will attempt to fall back to typical defaults for the platform (Ethernet on Windows, em0 on BSD, en0 on OSX/Darwin, and eth0 otherwise). If that fails, then it will fall back to lo on POSIX systems.
“Remote hosts” refer to hosts in your local layer 2 network, also commonly referred to as a “broadcast domain”, “LAN”, or “VLAN”. As far as I know, there is not a reliable method to get a MAC address for a remote host external to the LAN. If you know any methods otherwise, please open a GitHub issue or shoot me an email, I’d love to be wrong about this.
The first four arguments are mutually exclusive. network_request does not have any functionality when the interface argument is specified, and can be safely left set if you are calling get_mac_address from a script.
The physical transport is assumed to be Ethernet (802.3). Others, such as Wi-Fi (802.11), are currently not tested or considered. I plan to address this in the future, and am definitely open to pull requests or issues related to this, including error reports.
Exceptions will be handled silently and returned as None. If you run into problems, you can set DEBUG to a non-zero value to get more information about what’s happening. If you’re still having issues, please create an issue on GitHub and include the output with DEBUG enabled.
Linux
Default interfaces: /proc/net/route, route, ip route list
Mac OSX (Darwin)
networksetup
Same commands as Linux
WSL
Windows commands are used for remote hosts
Unix commands are used for interfaces
OpenBSD
Commands: ifconfig, arp
Default interfaces: route
FreeBSD
Commands: ifconfig, arp
Default interfaces: netstat
Android
Commands: ip link
Platforms currently supported
All or almost all features should work on “supported” platforms. While other versions of the same family or distro may work, they are untested and may have bugs or missing features.
Windows
Desktop: 7, 8, 8.1, 10, 11 (thanks @StevenLooman for testing Windows 11!)
Server: TBD
Partially supported (untested): 2000, XP, Vista
Linux distros
CentOS/RHEL 6+
Ubuntu 16.04+ (15.10 and older should work, but are untested)
Fedora (24+)
Mac OSX (Darwin)
The latest two versions probably (TBD)
Android (6+)
Windows Subsystem for Linux (WSL)
FreeBSD (11+)
OpenBSD
Docker
Docker
Add -v /proc/1/net/arp:/host/arp -e ARP_PATH=/host/arp to access the ARP table of the host from inside a container running in bridge network mode.
docker build -f packaging/Dockerfile -t getmac .
docker run -it getmac:latest --help
docker run -it getmac:latest --version
docker run -it getmac:latest -n localhost
docker run --rm -it -v /proc/1/net/arp:/host/arp -e ARP_PATH=/host/arp getmac:latest -n 192.168.0.1
Caveats
Depending on the platform, there could be a performance detriment, due to heavy usage of regular expressions.
Platform test coverage is imperfect. If you’re having issues, then you might be using a platform I haven’t been able to test. Keep calm, open a GitHub issue, and I’d be more than happy to help.
Known Issues
Linux, WSL: Getting the MAC of a local interface’s IP address does not currently work (getmac --ip 10.0.0.4 will fail if 10.0.0.4 is the IP address of a local interface). This issue may be present on other POSIX systems as well.
Hostnames for IPv6 devices are not yet supported.
Windows: the “default” behavior (used when no arguments are specified) of selecting the interface of the default route only works effectively if network_request is enabled. If not, Ethernet is used as the default.
IPv6 support is good but lags behind IPv4 in some places and isn’t as well-tested across the supported platform set.
Background and history
The Python standard library has a robust set of networking functionality, such as urllib, ipaddress, ftplib, telnetlib, ssl, and more. Imagine my surprise, then, when I discovered there was not a way to get a seemingly simple piece of information: a MAC address. This package was born out of a need to get the MAC address of hosts on the network without needing admin permissions, and a cross-platform way to get the addresses of local interfaces.
In Fall 2018 the package name changed from get-mac to getmac. This affected the package name, the CLI script, and some of the documentation. There were no changes to the core library code. While both package names will be updated on PyPI, the use of getmac is preferred.
In Summer 2020, the code was significantly refactored, moving to a class-based structure and significantly improving performance and accuracy. See docs/rewrite.md for details.
Contributing
Contributors are more than welcome! See the contribution guide to get started, and check out the todo list for a full list of tasks and bugs.
🚀 Modern .NET cryptocurrency trading library – Unified API access to 110 exchange adapters (8 FULL, 3 PARTIAL, 99 SKELETON) with a focus on simplicity and performance.
✨ Key Features
🎯 Unified Interface – Same API across all exchanges
⚡ High Performance – ValueTask<T> async patterns, HTTP client pooling
🔒 Type Safe – Strong typing with comprehensive data models
🌍 Global Coverage – 110 exchange adapters (8 fully validated, 3 in progress)
A simple helper to add middleware to your zeit/micro or Now 2.0 functions.
This is a set of simple helpers to create and apply middleware to your functions
using either Zeit’s micro framework
or Zeit Now 2.0 serverless functions.
The entire runtime is less than 100 lines long and depends only on micro itself.
micro-mw operates in a similar fashion to other JS frameworks (e.g. Express, Hapi).
In this case, when writing request handlers, middleware needs to be applied to the target
function via applyMiddleware().
The most typical usage would look something like:
const { applyMiddleware } = require('micro-mw');

module.exports = applyMiddleware([ middlewareFn1, middlewareFn2 ], (req, res) => {
  // Normal request / response handling logic here
});
Sets
Often, the same middleware needs to be applied to most request handlers within an application
or set of serverless functions. For those situations, middleware can be pre-registered as a
logical “set” and then applied to each function via applyMiddleware().
Registering a set is as simple as giving it a name and passing in references to the middleware
functions that need to be called.
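As a sketch of what registration might look like (createSets is an assumed export name used here purely for illustration; check the micro-mw documentation for the actual registration call):

const { createSets } = require('micro-mw'); // NOTE: 'createSets' is an assumed name, for illustration only

// Register the named set once, typically at application startup
createSets({
  'my-route-mw': [ middlewareFn1, middlewareFn2 ]
});

The registered set can then be referenced by name: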
const { applyMiddleware } = require('micro-mw');

module.exports = applyMiddleware('my-route-mw', (req, res) => {
  // Normal request / response handling logic here
});
Default set / middleware
If you want to apply a set of middleware to all routes automatically (unless otherwise specified),
you can define a set of default middleware by using the special keyword default.
Then, when creating a route handler, don’t specify any middleware at all:
const { applyMiddleware } = require('micro-mw');

module.exports = applyMiddleware((req, res) => {
  // Normal request / response handling logic here
});
Set references
Often, the default middleware is enough for most functions, but occasionally, there
is a need to include other middleware in the request flow. For example, you might
want to include the default authorization middleware on all requests, but only need
database init logic in certain places.
In this case, micro-mw allows references to pre-defined middleware sets anywhere that
a middleware function could be specified.
Here are a couple of different ways this feature could be used:
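For instance (the middleware function and set names other than default are only illustrative):

const { applyMiddleware } = require('micro-mw');

// Use the 'default' set plus one extra middleware function for this route only
module.exports = applyMiddleware([ 'default', dbInitMiddleware ], (req, res) => {
  // Normal request / response handling logic here
});

// Named sets and plain functions can be mixed freely; order is preserved
// module.exports = applyMiddleware([ 'auth', dbInitMiddleware ], (req, res) => { /* ... */ });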
Whenever micro-mw encounters a string where a middleware function was expected, it
will automatically assume that it is a set reference. Order is important here, as
the referenced set will replace the string pointer in that exact location within
the array.
If a referenced set doesn’t exist, a runtime error will occur and will be processed
by the default error handler.
Error handling
By default, micro-mw will catch all sync and async errors that occur within a
middleware or route handler and return a response to the client.
To override this, simply create a set called errorHandler and pass in one or
more middleware functions that will be triggered in the case that an error is
thrown. Be sure to read creating error middleware
prior to writing your custom error handlers.
If not overridden, the default error handler will look for the following properties
on the error object:
err.statusCode: The numeric HTTP status code to send to the client. (default: 500)
err.body: The content to set as the response body. This could be a string, object, etc.
If a body isn’t provided, the value of err.message will be used instead.
err.headers: An optional JS object containing keys and values that should be added as
HTTP headers in the response.
Additionally, the error handler will output the status code, request method and path, and the
error stack trace to the logs via a call to console.error.
You can turn this off or provide your own logging function if desired by overriding the internal error handler.
Note: Any custom logging object must provide an error function, as the handler will
call it like: logger.error(msg).
Writing middleware
Writing middleware that is consumable by micro-mw is really no different than writing
a normal request handler. micro-mw uses async/await in order to handle synchronous
and asynchronous middleware in the same manner.
A typical middleware function looks like this:
const requestPromise = require('request-promise'); // any promise-based HTTP client works here

async function myMiddleware(req, res) {
  // Do some action based on the request
  let someObj = await requestPromise(url, { json: true });
  req.someObj = someObj;
}
Then use it per the patterns mentioned above. For example:
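A minimal sketch applying the middleware defined above (micro sends the handler’s return value as the response body):

const { applyMiddleware } = require('micro-mw');

module.exports = applyMiddleware([ myMiddleware ], (req, res) => {
  // req.someObj was attached by myMiddleware
  return req.someObj;
});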
You can, of course, do much more complicated things than this.
Creating error handling middleware
Error handling middleware is almost exactly like “normal” middleware, but make
note of a few key differences (a short sketch follows the list below):
The thrown error will be passed into the middleware function as a third param,
i.e. (req, res, err) => { ... }
The error handler is responsible for sending a response to the client.
Error handlers should typically avoid throwing errors themselves, as that will
likely result in no response being sent to the client.
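As a rough sketch of the shape of an error handler (using micro’s send helper and the default error properties described earlier):

const { send } = require('micro');

// The thrown error arrives as the third parameter
async function errorMw(req, res, err) {
  // The error handler is responsible for sending a response to the client
  send(res, err.statusCode || 500, err.body || { error: err.message });
}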
Halting request execution
There may be occasions during the lifecycle of a request where middleware needs
to send a response early. An example of this scenario might be when using CORS
and responding to an OPTIONS request from a browser. In this case, the CORS
middleware may send a response prior to all of the middleware or even the request
handler being run.
If the remaining middleware and handlers run, an exception may be thrown when an
attempt to send another response occurs.
The stopRequest function is provided for a middleware function to signal that a
response has been sent and that any remaining steps in the request should be canceled.
Consider the following sample:
const { applyMiddleware, stopRequest } = require('micro-mw');

function middlewareFn1(req, res) {
  // Send a response early
  ...

  stopRequest(req);
}

module.exports = applyMiddleware([ middlewareFn1, middlewareFn2 ], (req, res) => {
  // Normal request / response handling logic here
});
In this scenario, middlewareFn1 will run, but by calling stopRequest(req), the
remaining handlers (i.e. middlewareFn2 and the normal request/response handler)
will not be called.
Contributing
Pull requests and stars are always welcome. For bugs and feature requests,
please create an issue
@inproceedings{kushol2023domain,
title={Domain adaptation of MRI scanners as an alternative to MRI harmonization},
author={Kushol, Rafsanjany and Frayne, Richard and Graham, Simon J and Wilman, Alan H and Kalra, Sanjay and Yang, Yee-Hong},
booktitle={MICCAI Workshop on Domain Adaptation and Representation Transfer},
pages={1--11},
year={2023},
organization={Springer}
}
Abstract
Combining large multi-center datasets can enhance statistical power, particularly in the field of neurology, where data can be scarce. However, applying a deep learning model trained on existing neuroimaging data often leads to inconsistent results when tested on new data due to domain shift caused by differences between the training (source domain) and testing (target domain) data. Existing literature offers several solutions based on domain adaptation (DA) techniques, which ignore complex practical scenarios where heterogeneity may exist in the source or target domain. This study proposes a new perspective in solving the domain shift issue for MRI data by identifying and addressing the dominant factor causing heterogeneity in the dataset. We design an unsupervised DA method leveraging the maximum mean discrepancy and correlation alignment loss in order to align domain-invariant features. Instead of regarding the entire dataset as a source or target domain, the dataset is processed based on the dominant factor of data variations, which is the scanner manufacturer. Afterwards, the target domain’s feature space is aligned pairwise with respect to each source domain’s feature map. Experimental results demonstrate significant performance gain for multiple inter- and intra-study neurodegenerative disease classification tasks.
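As an illustrative sketch only (not the paper’s exact implementation), the two alignment losses mentioned above can be written in PyTorch roughly as follows:

import torch

def mmd_linear(source_feats, target_feats):
    # Linear-kernel maximum mean discrepancy: squared distance between the domain feature means
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return (delta * delta).sum()

def coral_loss(source_feats, target_feats):
    # Correlation alignment: squared Frobenius distance between the domain covariance matrices
    d = source_feats.size(1)

    def cov(x):
        xm = x - x.mean(dim=0, keepdim=True)
        return (xm.t() @ xm) / (x.size(0) - 1)

    diff = cov(source_feats) - cov(target_feats)
    return (diff * diff).sum() / (4.0 * d * d)

# In training, these terms would be added to the source-domain classification loss,
# computed pairwise between the target features and each source domain's features.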
Requirements
PyTorch
nibabel
scipy
scikit-image
Datasets
ADNI1, ADNI2, and AIBL datasets can be downloaded from ADNI (Alzheimer’s Disease Neuroimaging Initiative)
MIRIAD dataset can be downloaded from MIRIAD (Minimal Interval Resonance Imaging in Alzheimer’s Disease)
CALSNIC dataset can be requested from CALSNIC (Canadian ALS Neuroimaging Consortium)
Preprocessing
Skull stripping using FreeSurfer v7.3.2
Command: mri_synthstrip -i input -o stripped
Details can be found at SynthStrip (SynthStrip: Skull-Stripping for Any Brain Image)
One implementation can be found here. After registration, the image dimensions will be $182\times218\times182$ and the voxel dimensions will be $1\times1\times1~mm^3$.
Training
Run python train.py to train the network. It will generate dataset_source1_source2_to_target_max_accuracy.pth in the Results folder
Testing
Run python test.py. It will load the pre-trained model dataset_source1_source2_to_target_max_accuracy.pth and generate the classification results for the given target dataset
A simple multi-core scheduler that provides a trait to the application.
This trait, if implemented for an application-specific object, can be used to receive periodic calls from the scheduler.
The application trait object has to be registered with the scheduler to receive the periodic calls.
Task methods of the scheduler trait are optional to implement; any method that is not needed for a particular application object can simply be omitted.
Restrictions
To keep things simple, the scheduler has a couple of restrictions:
All task periods must be multiples of the smallest task period.
All tasks run with the same OS priority. Therefore, the tasks won’t interrupt each other.
The order of execution of the tasks is undefined.
The number of application objects that can be registered with the scheduler is a compile-time constant.
Supported platforms
esp-idf-hal: ESP32 with IDF.
Cargo.toml
[dependencies]
timeslice = { version = "0.4", features = [ "hal-espidf", "meas" ] }
Example code
A simple usage example can look like this:
// Here we define the scheduler, its tasks and behavior.
timeslice::define_timeslice_sched! {
    name: sched_main,
    num_objs: 1,
    tasks: {
        { name: task_10ms,  period: 10 ms,  cpu: 0, stack: 16 kiB },
        { name: task_50ms,  period: 50 ms,  cpu: 0, stack: 3 kiB },
        { name: task_100ms, period: 100 ms, cpu: 1, stack: 16 kiB },
    }
}

// This structure belongs to your application. It contains application state.
struct MyThing {
    // ...
}

impl MyThing {
    fn new() -> Self {
        Self {
            // ...
        }
    }
}

// Implement the scheduler's tasks for your application.
impl sched_main::Ops for Box<MyThing> {
    fn task_10ms(&self) {
        // Called every 10 ms.
        // ... Put your code here ...
    }

    fn task_50ms(&self) {
        // Called every 50 ms.
        // ... Put your code here ...
    }

    fn task_100ms(&self) {
        // Called every 100 ms.
        // ... Put your code here ...
    }
}

fn main() {
    // Initialize the application.
    use std::sync::Arc;
    let thing = Arc::new(Box::new(MyThing::new()));

    // Initialize the scheduler and register your application.
    let obj = Arc::clone(&thing);
    sched_main::init([obj]);

    // ...
}
See the documentation for more complex examples.
Backend selection
One backend has to be selected via feature flags. The following backends are available:
hal-espidf: Use esp-idf-hal and esp-idf-svc hal backend. Select this, if you use an ESP microcontroller.
hal-dummy: Backend for testing only. It does nothing. You should never select it.
Only one of the hal backend feature flags can be selected.
esp-idf-hal and esp-idf-svc versions
The hal-espidf backend depends on the following crates:
esp-idf-hal = "0.44"
esp-idf-svc = "0.49"
Features
meas: If the meas feature is enabled, then functions for run-time measurements will be available.
If this feature flag is not given, then the run-time measurement functions will be empty dummies.
Internals
ESP-IDF implementation details
On hal-espidf each task runs as a std::thread that is pinned to the specified CPU core. The threads wait for a trigger signal from a periodic ESP timer. On triggering, the trait methods are executed, if the time slice is due.