It re-encodes videos whose audio format Telegram does not support (videos that play with no sound) into a compatible format via FFmpeg, without any loss of video quality.
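As an illustrative sketch (file names and audio settings are assumptions, not taken from this project), such a fix typically stream-copies the video track untouched and re-encodes only the audio, e.g. by building an FFmpeg command like:

```python
# Hypothetical sketch: build an FFmpeg command that keeps the video stream
# bit-for-bit (no quality loss) and re-encodes only the audio to AAC,
# a codec Telegram plays. File names and audio bitrate are placeholders.
import shlex

def build_ffmpeg_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-c:v", "copy",   # stream-copy video: lossless
        "-c:a", "aac",    # transcode audio to a Telegram-supported codec
        "-b:a", "128k",
        dst,
    ]

print(shlex.join(build_ffmpeg_cmd("input.mkv", "output.mp4")))
```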
Environment Variables:
Set the environment variables and add them to config.env.
This repository is no longer maintained. We moved all our Dashboard applications to a brand-new dashboard-apps monorepo, which lets you run them all locally with ease.
App SKUs
Commerce Layer application for managing SKUs.
Any Commerce Layer account comes with a hosted version of this application, as part of the Dashboard hub, and it is automatically enabled for admin users.
An admin can then enable the app for other organization members, granting each member full or read-only access.
You can fork this app and add it to your Dashboard hub to customize every part of the code and run your own self-hosted version.
Set up your environment by creating a new /src/app/.env.local file based on /src/app/.env (not required for local development).
Install dependencies and run the development server:
pnpm install
pnpm dev
The app will run in development mode at http://localhost:5173/.
To authenticate the app, you need to add an integration access token as a URL query parameter. Example: http://localhost:5173/?accessToken=<integration-token-for-local-dev>.
That access token is required (and will work) only in development mode. In production mode, the Commerce Layer Dashboard hub generates a valid access token based on the current user.
Modify the app to satisfy your requirements.
All our Dashboard apps are built using a shared component library @commercelayer/app-elements.
You can browse the official documentation to discover more about this topic.
Deploy the forked repository to your preferred hosting service. You can deploy with one click below:
Complete the configuration in the Dashboard hub by setting your app URL.
A decently performant algorithmic trading system for crypto exchanges, built from the ground up as a personal project to help me and my friend explore fintech and learn Rust. This project is a demonstration version of a private endeavour and has had content redacted.
The implementation of insightful analysis and profitable strategies is left as an exercise to the reader.
Quickstart (for Binance testnet)
Install Rust and Cargo
Create a Binance testnet or live account
Create a real environment file from the shell or batch sample and add it to .gitignore
Populate the environment file with your API keys
Run .env.bat (Windows) or source .env.sh (Unix)
Run cargo run --release
Further info can be found in directory READMEs.
You may need to delete rust-toolchain.toml and manually set Rust to nightly from the command line, depending on your environment.
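As a sketch, the environment file mentioned in the quickstart might look like this (the variable names are hypothetical; use whatever the shipped shell/batch samples actually define):

```shell
# Hypothetical .env.sh sketch -- variable names are illustrative only;
# copy the sample file shipped with the repo and keep its names.
export BINANCE_API_KEY="your-testnet-api-key"
export BINANCE_API_SECRET="your-testnet-api-secret"
```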
DISCLAIMER
I have no professional experience in algorithmic trading and am entirely self-taught. This project is incomplete, lacks comprehensive testing, and contains unresolved bugs. I offer no assurance that the tools and systems I have developed meet any recognized standard or follow any established convention.
Text to Video API:
https://en.bomou.com/avatars/api?token={your token}
&model_id={your custom AI avatar id}
&template_id={your custom AI avatar template/style id}
&speaker_id={the accent model id}
&speaker_style={assistant, chat, customerservice, newscast, affectionate, angry, calm, cheerful, disgruntled, fearful, gentle, lyrical, sad, serious, poetry-reading}
&speaker_speed={speed level: -10 (slow) to 10 (fast)}
&speaker_volume={volume level: -10 (low) to 10 (high)}
&speaker_tone={pitch of the tone: -10 to 10}
&text={video script, plain text or with some tags}
&pos={the avatar position/scale on the background image or video}
&bgurl={background image or video url}
&callback={url called at the end of video synthesis, when the video is ready for download}
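As a sketch, the query string above can be assembled with Python's standard library (all values below are placeholders, not working credentials or IDs):

```python
# Sketch of composing the text-to-video request URL from the documented
# query parameters. Every value here is a placeholder.
from urllib.parse import urlencode

BASE = "https://en.bomou.com/avatars/api"

params = {
    "token": "YOUR_TOKEN",
    "model_id": "YOUR_AVATAR_ID",
    "template_id": "YOUR_TEMPLATE_ID",
    "speaker_id": "YOUR_SPEAKER_ID",
    "speaker_style": "cheerful",
    "speaker_speed": 0,    # -10 (slow) to 10 (fast)
    "speaker_volume": 0,   # -10 (low) to 10 (high)
    "speaker_tone": 0,     # -10 to 10 (pitch)
    "text": "Hello from my avatar!",
    "bgurl": "https://example.com/background.jpg",
    "callback": "https://example.com/avatar-callback",
}

url = f"{BASE}?{urlencode(params)}"
print(url)
```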
Audio to Video API:
https://en.bomou.com/avatars/api?token={your token}
&model_id={your custom AI avatar id}
&template_id={your custom AI avatar template/style id}
&audio={audio url}
&pos={the avatar position/scale on the background image or video}
&bgurl={background image or video url}
&callback={url called at the end of video synthesis, when the video is ready for download}
Callback URL:
https://yourcallbackurl?downloadurl={the download url for the generated video}
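On the receiving side, the download URL can be pulled out of the callback's query string; a minimal sketch (the example URL is made up):

```python
# Minimal sketch of extracting the generated video's download URL from the
# callback request, assuming the service appends ?downloadurl=... as documented.
from urllib.parse import urlparse, parse_qs

def download_url(callback_url: str) -> str:
    query = parse_qs(urlparse(callback_url).query)
    return query["downloadurl"][0]  # parse_qs also percent-decodes the value

example = "https://example.com/avatar-callback?downloadurl=https%3A%2F%2Fcdn.example.com%2Fvideo.mp4"
print(download_url(example))  # https://cdn.example.com/video.mp4
```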
Overleaf Automation Bot to Appear Active (pyoverleafbot)
Overview
pyoverleafbot is built to help users appear active on collaborative Overleaf projects. It automates interactions with Overleaf.
Tech Stack
Python
Selenium
What can this be used for?
For PhD Students: Helps manage stress
Coffee Break: Set up the bot to run during breaks to keep your presence active.
Scheduled Runs: Schedule the bot to run at specific times to maintain activity on projects.
Why use Selenium rather than the simpler pyautogui?
Selenium is a dedicated browser-automation tool
Selenium can interact directly with web applications in any browser
Selenium provides high-level interaction with web elements
Selenium can be used for web scraping
Selenium scripts can run in headless mode
Implementations
Platforms:
Windows (Currently only implemented for Windows OS)
Robust Login:
Windows Credential Manager (using email & password) + Store session information for faster login
Gmail login
Browsers:
Chrome
Others
Scheduling (in progress):
Scheduled runs while the machine is powered on (logged in or logged out)
Scheduled runs from a powered-off machine
Functionalities:
Fully automated
Manage credentials securely
Select project you want to work on
Appear active
Headless Mode
Linux (in progress)
Mac (in progress)
One time setup
Prerequisites: Windows with Python and Chrome browser installed.
Clone the repository: git clone https://github.com/supersjgk/pyoverleafbot
cd pyoverleafbot
Install the dependencies: pip install -r requirements.txt
cd pyoverleafbot
Set Credentials in Windows Credential Manager by running: python credential_manager.py set OverleafBot
Run the script:
The bot displays the available Overleaf projects and prompts you to select one: python script.py
To run the script in headless mode (uses fewer resources; the browser GUI is not displayed, but the script still runs normally), use the argument: --headless 1
If you already know the Overleaf Project ID: python script.py --project_id <Project ID>
If you want to appear active for x minutes (default 5 minutes), use the argument --duration <x>
The bot repeatedly selects a random line in the project (every 5 to 10 seconds by default) to appear active. To override the defaults, use the arguments --min_change_time <min_seconds> and --max_change_time <max_seconds>. Setting them to at least 45 and 60, respectively, is recommended to make the activity look more natural.
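The timing behaviour described above can be sketched as follows (this illustrates only the interval scheduling, not the project's actual Selenium code; the function name is hypothetical):

```python
# Sketch of the activity loop's timing logic: pick a random interval between
# the min/max change times, act, and repeat until the session duration has
# elapsed. The 45/60 defaults mirror the README's recommended values.
import random

def activity_intervals(duration_s: float, min_change: float = 45,
                       max_change: float = 60) -> list[float]:
    """Return the sequence of wait times planned for one session."""
    intervals, elapsed = [], 0.0
    while elapsed < duration_s:
        wait = random.uniform(min_change, max_change)
        intervals.append(wait)   # in the real bot: select a random line here
        elapsed += wait
    return intervals

plan = activity_intervals(5 * 60)  # a 5-minute session, the CLI default
print(len(plan), "line selections planned")
```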
A simple multi-threaded distributed SSH brute-forcing tool written in Python.
How it Works
When the script is executed without the --proxies switch, it acts just like any other multi-threaded SSH brute-forcing script. When the --proxies switch is added, the script pulls a list (usually thousands) of SOCKS4 proxies from ProxyScrape and launches all brute-force attacks over those proxies, so the attempts are less likely to be rate-limited by the target host.
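Conceptually, the multi-threaded distribution works like the sketch below, with the actual SSH attempt (and proxying) stubbed out; the function names and credentials are hypothetical, not OrbitalDump's real internals:

```python
# Illustrative sketch: worker threads pull (username, password) pairs from a
# shared queue until one succeeds or the queue empties. A real implementation
# would open an SSH connection (e.g. via paramiko), optionally through a
# SOCKS4 proxy, where try_login is stubbed here.
import queue
import threading

def try_login(host, username, password):
    # Stub standing in for a real SSH authentication attempt.
    return (username, password) == ("root", "hunter2")

def brute_force(host, usernames, passwords, n_threads=10):
    jobs = queue.Queue()
    for u in usernames:
        for p in passwords:
            jobs.put((u, p))
    found = []  # shared "stop" signal: non-empty once a login succeeds

    def worker():
        while not found:
            try:
                u, p = jobs.get_nowait()
            except queue.Empty:
                return
            if try_login(host, u, p):
                found.append((u, p))

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return found[0] if found else None

print(brute_force("example.com", ["admin", "root"], ["1234", "hunter2"]))
```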
Installation
You can install OrbitalDump through pip.
pip install -U --user orbitaldump
orbitaldump
Alternatively, you can clone this repository and run the source code directly.
git clone https://github.com/k4yt3x/orbitaldump.git
cd orbitaldump
python -m orbitaldump
Usage
A simple usage example is shown below:
orbitaldump -t 10 -u usernames.txt -p passwords.txt -h example.com --proxies
This command:
-t 10: launch 10 brute-forcing threads
-u usernames.txt: read usernames from usernames.txt (one username per line)
-p passwords.txt: read passwords from passwords.txt (one password per line)
-h example.com: set brute-forcing target to example.com
--proxies: launch attacks over proxies from ProxyScrape
Download the Mantra GSC and unzip the XML files into /test/mantra/dataset, then run:
cd test/mantra
python test.py
MCSM
cd test/embeddings_reimplement
python mcsm.py
DDBRC
Only sampled data is provided.
cd test/diseasedb
python train.py your_embedding embedding_type freeze_or_not gpu_id
embedding_type should be in [bert, word, cui]
freeze_or_not should be in [T, F], T means freeze the embedding, and F means fine-tune the embedding
Citation
@article{YUAN2022103983,
title = {CODER: Knowledge-infused cross-lingual medical term embedding for term normalization},
journal = {Journal of Biomedical Informatics},
pages = {103983},
year = {2022},
issn = {1532-0464},
doi = {https://doi.org/10.1016/j.jbi.2021.103983},
url = {https://www.sciencedirect.com/science/article/pii/S1532046421003129},
author = {Zheng Yuan and Zhengyun Zhao and Haixia Sun and Jiao Li and Fei Wang and Sheng Yu},
keywords = {medical term normalization, cross-lingual, medical term representation, knowledge graph embedding, contrastive learning}
}
@inproceedings{zeng-etal-2022-automatic,
title = "Automatic Biomedical Term Clustering by Learning Fine-grained Term Representations",
author = "Zeng, Sihang and Yuan, Zheng and Yu, Sheng",
booktitle = "Proceedings of the 21st Workshop on Biomedical Language Processing",
month = may,
year = "2022",
address = "Dublin, Ireland",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.bionlp-1.8",
pages = "91--96",
abstract = "Term clustering is important in biomedical knowledge graph construction. Using similarities between terms embedding is helpful for term clustering. State-of-the-art term embeddings leverage pretrained language models to encode terms, and use synonyms and relation knowledge from knowledge graphs to guide contrastive learning. These embeddings provide close embeddings for terms belonging to the same concept. However, from our probing experiments, these embeddings are not sensitive to minor textual differences which leads to failure for biomedical term clustering. To alleviate this problem, we adjust the sampling strategy in pretraining term embeddings by providing dynamic hard positive and negative samples during contrastive learning to learn fine-grained representations which result in better biomedical term clustering. We name our proposed method as CODER++, and it has been applied in clustering biomedical concepts in the newly released Biomedical Knowledge Graph named BIOS.",
}