Blog

  • Films

    Films Application

    Simple React Films Application

    Demo

    To clone

• You’ll need to have git and node installed on your system.

    git clone https://github.com/ArturW1998/Films.git

    Back-end

    Provides

    • express 4.x
    • graphql 15.x
    • mongoose 5.x

    To install

    • Then install the dependencies:

    npm run server-install

    • Copy .env.example as .env in the server directory

    • Replace the database connection string in the .env file
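The exact variable name is defined by the repository’s .env.example; purely as an illustrative sketch (MONGO_URI and the connection string below are assumptions, not the repo’s actual values), the edited line might look like:

```
MONGO_URI=mongodb+srv://<user>:<password>@cluster0.example.mongodb.net/films
```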

    Run development server

    • nodemon 2.x

    npm run server

    Open the web browser to http://localhost:3005/graphql

    To build the production package

    • pm2 4.x

    npm run server:prod

    Front-end

    Provides

    • react ^17.x
    • graphql 15.x
    • react-apollo 3.x

    To install

    • Then install the dependencies:

    npm run client-install

    Run development server

    • webpack-dev-server 3.x

    npm run client

    Open the web browser to http://localhost:3000/

    To build the production package

    • webpack 4.x
    • babel 7.x

    npm run client:build

    Unit Testing

    • jest 26.x
    • @testing-library/react 11.x

    To run tests using Jest and React Testing Library:

npm test (run in the client directory)

    Back-end & Front-end

    To install

    • Then install the dependencies:

    npm run install

    • Copy .env.example as .env in the server directory

    • Replace the database connection string in the .env file

    Run development server

    npm run dev

    Open the web browser to http://localhost:3000/

    To build the production package

    • Replace the NODE_ENV value with the production value in the .env file in the server directory

    • Then run the build:

    npm run build

    Code Quality

    • eslint 7.x
    • stylelint 12.x
    • prettier 1.x

    To do the actual linting and formatting, run:

    npm run lint / npm run lint:styles / npm run format / npm run format:styles

    Contribute

    Please contribute to the project if you know how to make it better, including this README 🙂

    Visit original content creator repository
    https://github.com/arthurvoloshyn/Films

  • multi-label-zero-shot

    Visit original content creator repository
    https://github.com/bionlproc/multi-label-zero-shot

  • getmac


    Pure-Python package to get the MAC address of network interfaces and hosts on the local network.

    It provides a platform-independent interface to get the MAC addresses of:

    • System network interfaces (by interface name)
    • Remote hosts on the local network (by IPv4/IPv6 address or hostname)

    It provides one function: get_mac_address()


    Should you use this package?

    If you only need the addresses of network interfaces, have a limited set of platforms to support, and are able to handle C-extension modules, then you should instead check out the excellent netifaces package by Alastair Houghton (@al45tair). It’s significantly faster (thanks to it being C-code) and has been around a long time and seen widespread usage. However, unfortunately it is no longer maintained as of May 2021, so it may not be a great choice for new projects. Another great option that fits these requirements is the well-known and battle-hardened psutil package by Giampaolo Rodola.

If the only system you need to run on is Linux, you can run as root, and C-extension modules are fine, then you should instead check out the arpreq package by Sebastian Schrader. In some cases it can be significantly faster.

    If you want to use psutil, scapy, or netifaces, I have examples of how to do so in a GitHub Gist.

    Installation

    Stable release from PyPI

    pip install getmac

    Latest development version

    pip install https://github.com/ghostofgoes/getmac/archive/main.tar.gz

    Python examples

    from getmac import get_mac_address
    eth_mac = get_mac_address(interface="eth0")
    win_mac = get_mac_address(interface="Ethernet 3")
    ip_mac = get_mac_address(ip="192.168.0.1")
    ip6_mac = get_mac_address(ip6="::1")
    host_mac = get_mac_address(hostname="localhost")
    updated_mac = get_mac_address(ip="10.0.0.1", network_request=True)
    
    # Enable debugging
    from getmac import getmac
    getmac.DEBUG = 2  # DEBUG level 2
    print(getmac.get_mac_address(interface="Ethernet 3"))
    
    # Change the UDP port used for updating the ARP table (UDP packet)
    from getmac import getmac
    getmac.PORT = 44444  # Default is 55555
    print(getmac.get_mac_address(ip="192.168.0.1", network_request=True))

    Terminal examples

    Python 2 users: use getmac2 or python -m getmac instead of getmac.

    getmac --help
    getmac --version
    
    # Invoking with no arguments will return MAC of the default interface
    getmac
    
    # Usage as a module
    python3 -m getmac
    
    # Interface names, IPv4/IPv6 addresses, or Hostnames can be specified
    getmac --interface ens33
    getmac --ip 192.168.0.1
    getmac --ip6 ::1
    getmac --hostname home.router
    
    # Running as a Python module with shorthands for the arguments
    python -m getmac -i 'Ethernet 4'
    python -m getmac -4 192.168.0.1
    python -m getmac -6 ::1
    python -m getmac -n home.router
    
    # Getting the MAC address of a remote host requires the ARP table to be populated.
    # By default, getmac will populate the table by sending a UDP packet to a high port on the host (defaults to 55555).
    # This can be disabled with --no-network-request, as shown here:
    getmac --no-network-request --ip 192.168.0.1
    python -m getmac --no-network-request -n home.router
    
    # Enable output messages
    getmac --verbose
    
    # Debug levels can be specified with '-d'
    getmac -v --debug
    python -m getmac -v -d -i enp11s4
    python -m getmac -v -dd -n home.router
    
    # Change the UDP port used for populating the ARP table when getting the MAC of a remote host
    getmac --ip 192.168.0.1 --override-port 9001
    
    # The platform detected by getmac can be overridden via '--override-platform'.
    # This is useful when debugging issues or if you know a method
    # for a different platform works on the current platform.
    # Any values returned by platform.system() are valid.
    getmac -i eth0 --override-platform linux
    getmac --ip 192.168.0.1 --override-platform windows
    
    # Force a specific method to be used, regardless of the consequences or if it even works
    getmac -v -dddd --ip 192.168.0.1 --force-method ctypeshost

    Function: get_mac_address()

    • interface: Name of a network interface on the system
    • ip: IPv4 address of a remote host
    • ip6: IPv6 address of a remote host
    • hostname: Hostname of a remote host
• network_request: Whether a network request should be made to update and populate the ARP/NDP table used to look up the MACs of remote hosts. Disable this if you want to just use what’s already in the table, or if you have requirements to prevent network traffic. The network request is an empty UDP packet sent to a high port, 55555 by default. This can be changed by setting getmac.PORT to the desired integer value. Additionally, on Windows, this will send a UDP packet to 1.1.1.1:53 to attempt to determine the default interface (note: the IP is Cloudflare’s DNS server).

    Configuration

    • logging.getLogger("getmac"): Runtime messages and errors are recorded to the getmac logger using Python’s logging module. They can be configured by using logging.basicConfig() or adding handlers to the "getmac" logger.
    • getmac.getmac.DEBUG: integer value that controls debugging output. The higher the value, the more output you get.
    • getmac.getmac.PORT: UDP port used to populate the ARP/NDP table (see the documentation of the network_request argument in get_mac_address() for details)
• getmac.getmac.OVERRIDE_PLATFORM: Override the platform detection with the given value (e.g. "linux", "windows", "freebsd", etc.). Any values returned by platform.system() are valid.
• getmac.getmac.FORCE_METHOD: Force a specific method to be used, e.g. ‘IpNeighborShow’. This will be used regardless of its method type or platform compatibility, and Method.test() will NOT be checked! The list of available methods is in getmac.getmac.METHODS.
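The logger configuration above can be sketched using only Python’s standard logging module, without importing getmac at all (the handler and format choices below are illustrative):

```python
import logging

# Attach a handler to the "getmac" logger so the library's runtime
# messages and errors become visible; DEBUG level shows everything.
logger = logging.getLogger("getmac")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(name)s %(levelname)s: %(message)s"))
logger.addHandler(handler)
```

Alternatively, logging.basicConfig(level=logging.DEBUG) configures the root logger, which the "getmac" logger propagates to by default.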

    Features

    • Pure-Python (no compiled C-extensions required!)
    • Python 2.7 and 3.4+
    • Lightweight, with no dependencies and a small package size
    • Can be dropped into a project as a standalone .py file
    • Supports most interpreters: CPython, pypy, pypy3, IronPython 2.7, and Jython 2.7
    • Provides a simple command line tool (when installed as a package)
    • MIT licensed!

    Legacy Python versions

If you are running an old Python interpreter (2.6/3.3 and older), then you can install an older version of getmac that supported that version. The wheels are available in the GitHub releases, or from PyPI with a current version of pip and some special arguments.

    • Python 2.5: get-mac 0.5.0
    • Python 2.6: getmac 0.6.0
    • Python 3.2: get-mac 0.3.0
    • Python 3.3: get-mac 0.3.0

    NOTE: these versions do not have many of the performance improvements, platform support, and bug fixes that came with later releases. They generally work, just not as well. However, if you’re using such an old Python, you probably don’t care about all that 🙂

    Notes

    • Python 3.10 and 3.11 should work, but are not automatically tested at the moment due to having to support 2.7
    • If none of the arguments are selected, the default network interface for the system will be used. If the default network interface cannot be determined, then it will attempt to fallback to typical defaults for the platform (Ethernet on Windows, em0 on BSD, en0 on OSX/Darwin, and eth0 otherwise). If that fails, then it will fallback to lo on POSIX systems.
    • “Remote hosts” refer to hosts in your local layer 2 network, also commonly referred to as a “broadcast domain”, “LAN”, or “VLAN”. As far as I know, there is not a reliable method to get a MAC address for a remote host external to the LAN. If you know any methods otherwise, please open a GitHub issue or shoot me an email, I’d love to be wrong about this.
• The first four arguments are mutually exclusive. network_request does not have any functionality when the interface argument is specified, and can safely be left set when used in a script.
    • The physical transport is assumed to be Ethernet (802.3). Others, such as Wi-Fi (802.11), are currently not tested or considered. I plan to address this in the future, and am definitely open to pull requests or issues related to this, including error reports.
• Exceptions will be handled silently and returned as None. If you run into problems, you can set DEBUG to true and get more information about what’s happening. If you’re still having issues, please create an issue on GitHub and include the output with DEBUG enabled.

    Commands and techniques by platform

    • Windows
      • Commands: getmac.exe, ipconfig.exe, arp.exe, wmic.exe
      • Libraries: uuid, ctypes, socket
    • Linux/Unix
      • Commands: arp, ip, ifconfig, netstat, ip link, lanscan
      • Libraries: uuid, fcntl, socket
      • Files: /sys/class/net/{iface}/address, /proc/net/arp
      • Default interfaces: /proc/net/route, route, ip route list
    • Mac OSX (Darwin)
      • networksetup
      • Same commands as Linux
    • WSL
      • Windows commands are used for remote hosts
      • Unix commands are used for interfaces
    • OpenBSD
      • Commands: ifconfig, arp
      • Default interfaces: route
    • FreeBSD
      • Commands: ifconfig, arp
      • Default interfaces: netstat
    • Android
      • Commands: ip link

    Platforms currently supported

    All or almost all features should work on “supported” platforms. While other versions of the same family or distro may work, they are untested and may have bugs or missing features.

    • Windows
      • Desktop: 7, 8, 8.1, 10, 11 (thanks @StevenLooman for testing Windows 11!)
      • Server: TBD
      • Partially supported (untested): 2000, XP, Vista
    • Linux distros
      • CentOS/RHEL 6+
      • Ubuntu 16.04+ (15.10 and older should work, but are untested)
      • Fedora (24+)
    • Mac OSX (Darwin)
      • The latest two versions probably (TBD)
    • Android (6+)
    • Windows Subsystem for Linux (WSL)
    • FreeBSD (11+)
    • OpenBSD
    • Docker

    Docker

Add -v /proc/1/net/arp:/host/arp -e ARP_PATH=/host/arp to access the host’s ARP table inside a container running in bridge network mode.

    docker build -f packaging/Dockerfile -t getmac .
    docker run -it getmac:latest --help
    docker run -it getmac:latest --version
    docker run -it getmac:latest -n localhost
    docker run --rm -it -v /proc/1/net/arp:/host/arp -e ARP_PATH=/host/arp getmac:latest -n 192.168.0.1

    Caveats

    • Depending on the platform, there could be a performance detriment, due to heavy usage of regular expressions.
    • Platform test coverage is imperfect. If you’re having issues, then you might be using a platform I haven’t been able to test. Keep calm, open a GitHub issue, and I’d be more than happy to help.

    Known Issues

• Linux, WSL: Getting the MAC of a local interface IP does not currently work (getmac --ip 10.0.0.4 will fail if 10.0.0.4 is the IP address of a local interface). This issue may be present on other POSIX systems as well.
    • Hostnames for IPv6 devices are not yet supported.
    • Windows: the “default” (used when no arguments set or specified) of selecting the default route interface only works effectively if network_request is enabled. If not, Ethernet is used as the default.
    • IPv6 support is good but lags behind IPv4 in some places and isn’t as well-tested across the supported platform set.

    Background and history

The Python standard library has a robust set of networking functionality, such as urllib, ipaddress, ftplib, telnetlib, ssl, and more. Imagine my surprise, then, when I discovered there was not a way to get a seemingly simple piece of information: a MAC address. This package was born out of a need to get the MAC address of hosts on the network without needing admin permissions, and a cross-platform way to get the addresses of local interfaces.

In Fall 2018 the package name changed to getmac from get-mac. This affected the package name, the CLI script, and some of the documentation. There were no changes to the core library code. While both package names will be updated on PyPI, the use of getmac is preferred.

    In Summer 2020, the code was significantly refactored, moving to a class-based structure and significantly improving performance and accuracy. See docs/rewrite.md for details.

    Contributing

    Contributors are more than welcome! See the contribution guide to get started, and checkout the todo list for a full list of tasks and bugs.

    Before submitting a PR, please make sure you’ve completed the pull request checklist!

    The Python Discord server is a good place to ask questions or discuss the project (Handle: @KnownError#0001).

    Contributors

    • Christopher Goes (@ghostofgoes) – Author and maintainer
    • Calvin Tran (@cyberhobbes) – Windows interface detection improvements
    • Daniel Flanagan (@FlantasticDan) – Code cleanup
    • @emadmahdi – Android fixes
    • Izra Faturrahman (@Frizz925) – Unit tests using the platform samples
    • Jose Gonzalez (@Komish) – Docker container and Docker testing
    • @fortunate-man – Awesome usage videos
    • @martmists – legacy Python compatibility improvements
    • @hargoniX – scripts and specfiles for RPM packaging
    • Ville Skyttä (@scop) – arping lookup support
    • Tomasz Duda (@tomaszduda23) – support for docker in network bridge mode
    • Steven Looman (@StevenLooman) – Windows 11 testing
    • Reimund Renner (@raymanP) – macOS fixes

    Sources

    Many of the methods used to acquire an address and the core logic framework are attributed to the CPython project’s UUID implementation.

    Other notable sources

    License

    MIT. Feel free to copy, modify, and use to your heart’s content. Enjoy 🙂

    Visit original content creator repository https://github.com/GhostofGoes/getmac
  • splatoon-notion

    splatoon-notion

    Demo

A Google Apps Script that automatically saves Splatoon 2 battle and Salmon Run data to Notion.

    DEMO

    https://yutakobayashi.notion.site/Splatoon2-d66a5ae5905f4fc8b14636e138c4cc87

    Usage

This script uses the SplatNet 2 (イカリング 2) API. Use of this API is not officially sanctioned, so use it at your own risk.

    iksm_session

To access the API, obtain a token called iksm_session.

This token can be obtained by inspecting Nintendo Switch Online traffic with a proxy tool, or by using a web app called iksm-chan.

Duplicating the page and obtaining database ids

Duplicate the following page in Notion and obtain the ids of the Battle, Salmon Run, Rule, Stage, Game Mode, Weapon, Sub Weapon, Special Weapon, Gear, and Brand databases.

    https://yutakobayashi.notion.site/Splatoon2-cc3b04c2ac22476fb5e189275e261247

Setting properties

Set the script properties as follows, for example in the Google Apps Script classic editor.

    • "iksm_session": your iksm_session token
    • "notionToken": your Notion integration token
    • "battle_database": your Notion Battle Database id
    • "salmon_run_database": your Notion Salmon Run Database id
    • "mode_database": your Notion Game Mode Database id
    • "rule_database": your Notion Rule Database id
    • "stage_database": your Notion Stage Database id
    • "weapon_database": your Notion Weapon Database id
    • "subweapon_database": your Notion Sub Weapon Database id
    • "specialweapon_database": your Notion Special Weapon Database id
    • "gear_database": your Notion Gear Database id
    • "brand_database": your Notion Brand Database id
    • "discordWebHookURL": your Discord Webhook URL

Scheduled execution

By running the main function on a schedule, battle history, Salmon Run results, the weapons and gear you used, the stages you played, and other data are automatically created in Notion. Because the SplatNet 2 API only returns the most recent 50 battles, schedule runs often enough that data isn’t lost.

    TODO

• Integration with ffmpeg, numpy, and the YouTube Data API
    Visit original content creator repository https://github.com/yutakobayashidev/splatoon-notion
  • ccxt.simple

    CCXT.Simple


    🚀 Modern .NET cryptocurrency trading library – Unified API access to 110 exchange adapters (8 FULL, 3 PARTIAL, 99 SKELETON) with a focus on simplicity and performance.

    ✨ Key Features

    • 🎯 Unified Interface – Same API across all exchanges
• ⚡ High Performance – ValueTask<T> async patterns, HTTP client pooling
    • 🔒 Type Safe – Strong typing with comprehensive data models
    • 🌍 Global Coverage – 110 exchange adapters (8 fully validated, 3 in progress)
    • 📊 Complete API – Market data, trading, account management, funding operations

    🚀 Quick Start

    Installation

    dotnet add package CCXT.Simple

    Basic Usage

    using CCXT.Simple.Exchanges.Binance;
    
    // Initialize exchange
    var exchange = new Exchange("USD");
    var binance = new XBinance(exchange, "api_key", "secret_key");
    
    // Get market data
    var btcPrice = await binance.GetPrice("BTCUSDT");
    var orderbook = await binance.GetOrderbook("BTCUSDT", 10);
    
    // Trading operations
    var balances = await binance.GetBalance();
    var order = await binance.PlaceOrder("BTCUSDT", SideType.Buy, "limit", 0.001m, 50000m);

    🏢 Exchange Support

    ✅ Fully Functional (8 exchanges)

    OKX | Bitstamp | Coinbase | Coinone | Bithumb | Kraken | Binance | Upbit

    🌓 Partial / In Progress (3)

    Huobi | KuCoin | Korbit

    🚧 Priority Development Queue

Bitfinex | Gemini | Poloniex | Mexc | Deribit | Bitmex

    📋 Skeleton Ready (99 exchanges)

    Remaining adapters are generated skeletons awaiting implementation.

    📖 View complete exchange list and status →

    💡 Architecture

    Built on a multi-exchange adapter pattern with a unified IExchange interface:

    public interface IExchange
    {
        // Market Data
        ValueTask<Orderbook> GetOrderbook(string symbol, int limit = 5);
        ValueTask<decimal> GetPrice(string symbol);
        
        // Trading
        ValueTask<OrderInfo> PlaceOrder(string symbol, SideType side, string orderType, decimal amount, decimal? price = null);
        ValueTask<Dictionary<string, BalanceInfo>> GetBalance();
        
        // Funding
        ValueTask<DepositAddress> GetDepositAddress(string currency, string network = null);
        ValueTask<WithdrawalInfo> Withdraw(string currency, decimal amount, string address);
    }

    🔧 Configuration

    // Basic setup
    var exchange = new Exchange("USD");  // or "KRW", "EUR", etc.
    exchange.ApiCallDelaySeconds = 1;    // Rate limiting
    exchange.Volume24hBase = 1000000;    // Volume thresholds
    
    // With events
    exchange.OnMessageEvent += (ex, msg, code) => Console.WriteLine($"[{ex}] {msg}");
    exchange.OnUsdPriceEvent += price => Console.WriteLine($"BTC: ${price}");

    📚 Documentation & Examples

    Running Examples

    git clone https://github.com/odinsoft-lab/ccxt.simple.git
    cd ccxt.simple
    dotnet run --project samples/ccxt.sample.csproj

    🤝 Contributing

    We welcome contributions! Need a specific exchange implemented? Create an issue – exchanges with more community requests get priority.

    Development Setup

    git clone https://github.com/odinsoft-lab/ccxt.simple.git
    cd ccxt.simple
    dotnet build              # Build solution  
    dotnet test               # Run 73 tests

    📊 Project Status

    • Current Version: 1.1.10 (.NET 8.0 & 9.0 & 10.0)
    • Architecture: Thread-safe, event-driven, REST API focused
    • Test Coverage: 73 tests passing
    • Active Development: Monthly updates, community-driven priorities

    👥 Team

    Core Development Team

    📞 Support & Contact

    📄 License

    MIT License – see LICENSE.md for details.


    Built with ❤️ by the ODINSOFT Team | ⭐ Star us on GitHub

    Visit original content creator repository https://github.com/ccxt-net/ccxt.simple
  • micro-mw


    micro-mw

    a simple helper to add middleware to your zeit/micro or Now 2.0 functions.

This is a set of simple helpers to create and apply middleware to your functions using either Zeit’s micro framework or Zeit Now 2.0 serverless functions.

    The entire runtime is less than 100 lines long and depends only on micro itself.

    Install

    Install with npm

    $ npm i micro-mw

    micro-mw requires Node.js v8.0.0 or higher.

    Concepts and usage

    micro-mw operates in similar fashion to that of other JS frameworks (e.g. Express, Hapi). In this case, when writing request handlers, middleware needs to be applied to the target function via applyMiddleware().

The most typical usage would look something like:

    const { applyMiddleware } = require('micro-mw');
    
    module.exports = applyMiddleware([ middlewareFn1, middlewareFn2 ], (req, res) => {
      // Normal request / response handling logic here
    });

    Sets

    Often, the same middleware needs to be applied to most request handlers within an application or set of serverless functions. For those situations, middleware can be pre-registered as a logical “set” and then applied to each function via applyMiddleware().

    Registering a set is as simple as giving it a name and passing in references to the middleware functions that need to be called.

    const { createSet } = require('micro-mw');
    
    createSet('my-route-mw', [ middlewareFn1, middlewareFn2 ]);

    Then just use the middleware like this:

    const { applyMiddleware } = require('micro-mw');
    
    module.exports = applyMiddleware('my-route-mw', (req, res) => {
      // Normal request / response handling logic here
    });

    Default set / middleware

    If you want to apply a set of middleware to all routes automatically (unless otherwise specified), you can define a set of default middleware by using the special keyword default:

    const { createSet } = require('micro-mw');
    
    createSet('default', [ middlewareFn1, middlewareFn2 ]);

    Then, when creating a route handler, don’t specify any middleware at all:

    const { applyMiddleware } = require('micro-mw');
    
    module.exports = applyMiddleware((req, res) => {
      // Normal request / response handling logic here
    });

    Set references

    Often, the default middleware is enough for most functions, but occasionally, there is a need to include other middleware in the request flow. For example, you might want to include the default authorization middleware on all requests, but only need database init logic in certain places.

    In this case, micro-mw allows references to pre-defined middleware sets anywhere that a middleware function could be specified.

    Here are a couple of different ways this feature could be used:

    • Reference one set from another

      const { createSet } = require('micro-mw');
      
      createSet('auth', [ authUserMw, getProfileMw, checkScopesMw ]);
      createSet('db', [ initDbMw ]);
      
      createSet('default', [ 'auth', 'db' ]);
      
    • Chain sets together

      const { applyMiddleware } = require('micro-mw');
      
      module.exports = applyMiddleware([ 'db', 'auth', myCustomMwFn ], (req, res) => {
        // Normal route logic
      });
      

    Whenever micro-mw encounters a string where a middleware function was expected, it will automatically assume that it is a set reference. Order is important here, as the referenced set will replace the string pointer in that exact location within the array.

    If a referenced set doesn’t exist, a runtime error will occur and will be processed by the default error handler.
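The in-place replacement of set references described above could be sketched like this (an illustration of the behavior, not micro-mw’s actual implementation; the set and function names are made up):

```javascript
// Illustrative middleware functions (no-ops for the sketch)
const authUserMw = async (req, res) => {};
const getProfileMw = async (req, res) => {};
const initDbMw = async (req, res) => {};

// Registered sets, keyed by name
const sets = {
  auth: [authUserMw, getProfileMw],
  db: [initDbMw],
};

// Expand string entries into the functions of the set they reference,
// preserving order; unknown set names become runtime errors.
function resolve(middleware) {
  return middleware.flatMap(entry => {
    if (typeof entry !== 'string') return [entry];
    if (!sets[entry]) throw new Error(`Unknown middleware set: ${entry}`);
    return resolve(sets[entry]); // sets may reference other sets
  });
}
```

Here resolve(['db', 'auth']) yields the functions of 'db' followed by those of 'auth', in the positions the string pointers occupied.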

    Error handling

    By default, micro-mw will catch all sync and async errors that occur within a middleware or route handler and return a response to the client.

    To override this, simply create a set called errorHandler and pass in one or more middleware functions that will be triggered in the case that an error is thrown. Be sure to read creating error middleware prior to writing your custom error handlers.

    If not overridden, the default error handler will look for the following properties on the error object:

    • err.statusCode: The numeric HTTP status code to send to the client. (default: 500)
    • err.body: The content to set as the response body. This could be a string, object, etc. If a body isn’t provided, the value of err.message will be used instead.
    • err.headers: An optional JS object containing keys and values that should be added as HTTP headers in the response.
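    As a sketch of how these properties fit together, a middleware or route handler might throw an error shaped like the one below. The `notFoundError` helper name is hypothetical; only the property names follow the list above.

    ```javascript
    // Hypothetical helper that builds an error the default handler can
    // translate into an HTTP response using the properties listed above.
    function notFoundError(detail) {
      const err = new Error(detail);
      err.statusCode = 404;                          // HTTP status to send
      err.body = { error: 'not_found', detail };     // response body
      err.headers = { 'Cache-Control': 'no-store' }; // extra response headers
      return err;
    }

    // Throwing it from any middleware or route handler yields a 404:
    //   throw notFoundError('Profile does not exist');
    ```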

    Additionally, the error handler will output the status code, request method and path, and the error stack trace to the logs via a call to console.error.

    You can turn this off or provide your own logging function if desired. Simply override the internal error handler like this:

    const { createSet, errorHandler } = require('micro-mw');
    
    createSet('errorHandler', [ errorHandler({ logErrors: true, logger: myLoggerObj }) ]);

    Note: Any custom logging object must provide an error function, as the handler will call it like: logger.error(msg).
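    A minimal custom logger therefore only needs that one method (the `[api]` prefix below is purely illustrative):

    ```javascript
    // Minimal logger object: the error handler only ever calls logger.error(msg).
    const myLoggerObj = {
      error(msg) {
        console.error(`[api] ${msg}`);
      },
    };
    ```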

    Writing middleware

    Writing middleware that is consumable by micro-mw is really no different than writing a normal request handler. micro-mw uses async/await in order to handle synchronous and asynchronous middleware in the same manner.

    A typical middleware function looks like this:

    async function myMiddleware(req, res) {
      // Do some action based on the request
      let someObj = await requestPromise(url, { json: true });
      req.someObj = someObj;
    }

    Then use it per the patterns mentioned above. For example:

    const { applyMiddleware } = require('micro-mw');
    
    module.exports = applyMiddleware([ myMiddleware ], (req, res) => {
      // Typical request handling
    });

    That’s it!

    You can, of course, do much more complicated things than this.

    Creating error handling middleware

    Error handling middleware is almost exactly like “normal” middleware, but make note of a few key differences:

    • The thrown error will be passed into the middleware function as a third param, i.e. (req, res, err) => { ... }

    • The error handler is responsible for sending a response to the client.

    • Error handlers should typically avoid throwing errors themselves, as that will likely result in no response being sent to the client.
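    Putting those points together, a custom error handler might look like the sketch below. It sends the reply with plain Node response methods; registering it under the errorHandler set name (shown in the trailing comment) follows the override pattern described earlier.

    ```javascript
    // Sketch of a custom error-handling middleware: note the third `err`
    // parameter. It must send the response itself and should not throw.
    function jsonErrorMw(req, res, err) {
      const status = err.statusCode || 500;
      res.statusCode = status;
      res.setHeader('Content-Type', 'application/json');
      res.end(JSON.stringify({ error: err.message }));
    }

    // Registered via:
    //   const { createSet } = require('micro-mw');
    //   createSet('errorHandler', [ jsonErrorMw ]);
    ```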

    Halting request execution

    There may be occasions during the lifecycle of a request, where middleware needs to send a response early. An example of this scenario might be when using CORS and responding to an OPTIONS request from a browser. In this case, the CORS middleware may send a response prior to all of the middleware or even the request handler being run.

    If the remaining middleware and handlers run, an exception may be thrown when an attempt to send another response occurs.

    The stopRequest function is provided for a middleware function to signal that a response has been sent and that any remaining steps in the request should be canceled.

    Consider the following sample:

    const { applyMiddleware, stopRequest } = require('micro-mw');
    
    function middlewareFn1(req, res) {
      // Send a response early
      ...
      stopRequest(req);
    }
    
    module.exports = applyMiddleware([ middlewareFn1, middlewareFn2 ], (req, res) => {
      // Normal request / response handling logic here
    });

    In this scenario, middlewareFn1 will run, but by calling stopRequest(req), the remaining handlers (i.e. middlewareFn2 and the normal request/response handler) will not be called.

    Contributing

    Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.

    Author

    Matt Hamann

    Thanks also to Mathias Karstädt for some inspiration in his work on micro-middleware.

    Visit original content creator repository https://github.com/mhamann/micro-mw
  • DAMS

    DAMS

    Domain Adaptation of MRI Scanners as an alternative to MRI harmonization

    The paper has been accepted for presentation at the 5th MICCAI Workshop on Domain Adaptation and Representation Transfer (DART). Paper link

    Download the slides of the presentation: DART_Rafsanjany_Kushol.pdf

    @inproceedings{kushol2023domain,
      title={Domain adaptation of MRI scanners as an alternative to MRI harmonization},
      author={Kushol, Rafsanjany and Frayne, Richard and Graham, Simon J and Wilman, Alan H and Kalra, Sanjay and Yang, Yee-Hong},
      booktitle={MICCAI Workshop on Domain Adaptation and Representation Transfer},
      pages={1--11},
      year={2023},
      organization={Springer}
    }
    

    Abstract

    Combining large multi-center datasets can enhance statistical power, particularly in the field of neurology, where data can be scarce. However, applying a deep learning model trained on existing neuroimaging data often leads to inconsistent results when tested on new data due to domain shift caused by differences between the training (source domain) and testing (target domain) data. Existing literature offers several solutions based on domain adaptation (DA) techniques, which ignore complex practical scenarios where heterogeneity may exist in the source or target domain. This study proposes a new perspective in solving the domain shift issue for MRI data by identifying and addressing the dominant factor causing heterogeneity in the dataset. We design an unsupervised DA method leveraging the maximum mean discrepancy and correlation alignment loss in order to align domain-invariant features. Instead of regarding the entire dataset as a source or target domain, the dataset is processed based on the dominant factor of data variations, which is the scanner manufacturer. Afterwards, the target domain’s feature space is aligned pairwise with respect to each source domain’s feature map. Experimental results demonstrate significant performance gain for multiple inter- and intra-study neurodegenerative disease classification tasks.

    Figure: Proposed architecture

    Requirements

    PyTorch
    nibabel
    scipy
    scikit-image

    Datasets

    The ADNI1, ADNI2, and AIBL datasets can be downloaded from ADNI (Alzheimer’s Disease Neuroimaging Initiative)

    MIRIAD dataset can be downloaded from MIRIAD (Minimal Interval Resonance Imaging in Alzheimer’s Disease)

    CALSNIC dataset can be requested from CALSNIC (Canadian ALS Neuroimaging Consortium)

    Preprocessing

    Skull stripping using Freesurfer v7.3.2

    Command mri_synthstrip -i input -o stripped

    Details can be found at SynthStrip (SynthStrip: Skull-Stripping for Any Brain Image)

    Registration to MNI-152 using FSL FLIRT function

    Details can be found at FSL

    One implementation can be found here. After registration, the image dimension will be $182\times218\times182$ and the voxel dimension will be $1\times1\times1$ $mm^3$.

    Training

    Run python train.py to train the network. It will generate dataset_source1_source2_to_target_max_accuracy.pth in the Results folder.

    Testing

    Run python test.py. It will load the pre-trained model dataset_source1_source2_to_target_max_accuracy.pth and generate the classification results for the given target dataset.

    Contact

    Email at: kushol@ualberta.ca

    Acknowledgement

    The basic structure of the code relies on the Deep Transfer Learning in PyTorch project

    Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources

    Deep CORAL: Correlation Alignment for Deep Domain Adaptation

    Visit original content creator repository https://github.com/rkushol/DAMS
  • timeslicers

    Simple periodic time slice scheduler

    Project home

    Git repository

    Github repository

    A simple multi-core scheduler that provides a trait to the application.
    If this trait is implemented for an application-specific object, the object can receive periodic calls from the scheduler.
    The application trait object has to be registered with the scheduler to receive these periodic calls.

    The task methods of the scheduler trait are optional to implement, so methods that are not needed by a particular application object can simply be omitted.

    Restrictions

    To keep things simple, the scheduler has a couple of restrictions:

    • All task periods must be multiples of the smallest task period.
    • All tasks run with the same OS priority. Therefore, the tasks won’t interrupt each other.
    • The order of execution of the tasks is undefined.
    • The number of application objects that can be registered to the scheduler is compile time constant.

    Supported platforms

    • esp-idf-hal: ESP32 with IDF.

    Cargo.toml

    [dependencies]
    timeslice = { version = "0.4", features = [ "hal-espidf", "meas" ] }

    Example code

    A simple usage example can look like this:

    // Here we define the scheduler, its tasks and behavior.
    timeslice::define_timeslice_sched! {
        name: sched_main,
        num_objs: 1,
        tasks: {
            { name: task_10ms, period: 10 ms, cpu: 0, stack: 16 kiB },
            { name: task_50ms, period: 50 ms, cpu: 0, stack: 3 kiB },
            { name: task_100ms, period: 100 ms, cpu: 1, stack: 16 kiB },
        }
    }
    
    // This structure belongs to your application. It contains application state.
    struct MyThing {
        // ...
    }
    
    impl MyThing {
        fn new() -> Self {
            Self {
                // ...
            }
        }
    }
    
    // Implement the scheduler's tasks for your application.
    impl sched_main::Ops for Box<MyThing> {
        fn task_10ms(&self) {
            // Called every 10 ms.
            // ... Put your code here ...
        }
    
        fn task_50ms(&self) {
            // Called every 50 ms.
            // ... Put your code here ...
        }
    
        fn task_100ms(&self) {
            // Called every 100 ms.
            // ... Put your code here ...
        }
    }
    
    fn main() {
        // Initialize the application.
        use std::sync::Arc;
        let thing = Arc::new(Box::new(MyThing::new()));
    
        // Initialize the scheduler and register your application.
        let obj = Arc::clone(&thing);
        sched_main::init([obj]);
    
        // ...
    }

    See the documentation for more complex examples.

    Backend selection

    One backend has to be selected via feature flags. The following backends are available:

    • hal-espidf: Use esp-idf-hal and esp-idf-svc hal backend. Select this, if you use an ESP microcontroller.
    • hal-dummy: Backend for testing only. It does nothing. You should never select it.

    Only one of the hal backend feature flags can be selected.

    esp-idf-hal and esp-idf-svc versions

    The hal-espidf backend depends on the following crates:

    esp-idf-hal = "0.44"
    esp-idf-svc = "0.49"

    Features

    • meas: If the meas feature is enabled, then functions for run time measurements will be enabled.
      If this feature flag is not given, then the run time measurement functions will be empty dummies.

    Internals

    ESP-IDF implementation details

    On hal-espidf each task runs as a std::thread that is pinned to the specified CPU core. The threads wait for a trigger signal from a periodic ESP timer. On triggering, the trait methods are executed, if the time slice is due.

    Memory safety

    This crate does not use unsafe code.

    License

    Copyright 2023-2024 Michael Büsch m@bues.ch

    Licensed under the Apache License version 2.0 or the MIT license, at your option.

    SPDX-License-Identifier: Apache-2.0 OR MIT

    Visit original content creator repository https://github.com/mbuesch/timeslicers