Author: shyy01fs4m8w

  • js-project-commons

    js-project-commons


    Common web and node.js grunt tasks/lint configs/md templates and so on…

    Overview

    This library holds common build tasks, lint configuration and so on, used by my personal node.js and web projects.
    This allows for a central configuration for all projects.

    Usage

In order to use this library, you need to refactor your build processes as defined in some of my projects, such as simple-oracledb.

    Installation

    In order to use this library, just run the following npm install command:

    npm install --save-dev js-project-commons

    API Documentation

    See full docs at: API Docs

    Contributing

    See contributing guide

    Release History

    Date Version Description
    2021-02-08 v1.3.2 Upgrading dependencies
    2020-05-11 v1.3.1 Support web projects without bower.json
    2020-05-11 v1.3.0 Update CI and ES6 syntax
    2019-10-07 v1.2.1 Update eslint rules
    2018-06-14 v1.1.11 Common karma config
    2018-06-13 v1.1.6 Karma browser to be defined in karma conf and not in grunt task
    2018-04-30 v1.1.3 node 10 support
    2018-04-23 v1.1.1 Update htmllint options
    2018-04-14 v1.1.0 Add support of node >=6 grunt tasks
    2018-01-17 v1.0.99 Remove lock files automatically
    2017-11-06 v1.0.98 Update mocha options
    2017-07-19 v1.0.96 Disable jslint by default
    2017-05-01 v1.0.89 Load env vars from file for build process
    2017-04-28 v1.0.83 Add support for simple dual web/node projects
    2017-04-24 v1.0.76 Added snyk validations
    2017-04-17 v1.0.72 Added YAML validations
    2017-04-11 v1.0.68 Modify stylelint rules configuration
    2017-02-13 v1.0.63 jsdoc2md fix for windows
    2017-02-11 v1.0.62 Added env validation for build
    2017-01-13 v1.0.60 No security validation for npm dependencies for web projects
    2017-01-13 v1.0.59 Adding file name linting
    2017-01-06 v1.0.57 Updated api docs to readme generation task
    2016-12-21 v1.0.54 Adding htmlhint to grunt build
    2016-12-16 v1.0.50 Adding stylelint to grunt build
    2016-12-15 v1.0.48 Adding security validations to grunt build
    2016-11-26 v1.0.44 Create github release grunt task
    2016-11-19 v1.0.34 Build dependencies are now packaged and loaded internally
    2016-09-21 v1.0.18 Added api docs to readme generation task
    2016-09-15 v1.0.17 jslint will test spec files
    2016-09-14 v1.0.15 jshint and jscs will test also spec files
    2016-09-13 v1.0.13 Adding html and css linting support
    2016-09-12 v1.0.10 eslint will test also spec files
    2016-09-10 v1.0.3 Grunt tasks are now grouped to node/web build configs
    2016-09-07 v1.0.1 Lazy loading of grunt task definitions
    2016-09-07 v1.0.0 Change hosted lint configurations
    2016-09-07 v0.0.2 Initial release.

    License

    Developed by Sagie Gur-Ari and licensed under the Apache 2 open source license.

    Visit original content creator repository https://github.com/sagiegurari/js-project-commons
  • plugin-Facets

    Visit original content creator repository
    https://github.com/DBinaghi/plugin-Facets

  • File-Sharing-System

    File Sharing System

    Basic Overview

This repository aims at creating a Network Attached Storage (NAS) system which, at the moment, works for local Wi-Fi networks only.
Though there are tons of existing tools and software which already do this, we took this up as a side project for the sake of learning.

    Details for Nerds

The project has two parts:

1. A Node.js server which will run on the NAS device
2. A client-side React web application which internally interacts with the server

    Question: Why did we choose NodeJS and React?

    Answer: You know the reasons :P.

    We will use webpack for bundling our source code for the Server as well as the Client.

    Question: Why use webpack?

    Answer: Honestly speaking, it is the only module bundler I know how to use. And also it’s pretty great!

    Setup config file and Storage folder

    Change the following in the config.json file:

1. serverUrl – the IP of the machine where the server is running, along with the port (3001)
2. storagePath – the storage path, relative to Server/source/modules

Make sure you create a folder for storage before building the code.
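A minimal sketch of such a config.json, assuming only the two fields described above (the IP, port, and storage path are placeholder values for illustration, not taken from the project):

```shell
# Create the storage folder first, then write a sketch config.json.
# serverUrl / storagePath are the fields described above; the values are examples only.
mkdir -p Storage
cat > config.json <<'EOF'
{
  "serverUrl": "http://192.168.1.10:3001",
  "storagePath": "../../Storage"
}
EOF
```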

    Setup guide (Dev mode)

    Server

    1. yarn
    2. yarn dev
    3. yarn devServe

    Client

    1. yarn
    2. yarn dev

    Setup guide (Prod mode)

    Server

    1. yarn
    2. yarn prod
    3. yarn prodServe

    Client

    1. yarn
    2. yarn prod
    3. serve -s prod

    Run the React web app:

In dev mode, the app server runs at http://localhost:8080
In prod mode, the app server will show a URL after running serve -s prod. Use that URL to run the app on any machine connected to your LAN.

Developers:

    Sparsha Saha

    Sourjya Mukherjee

    Visit original content creator repository
    https://github.com/SparshaSaha/File-Sharing-System

  • s3cmd-backup

    Simple s3cmd backup script

    This is a simple script that compresses a specified folder and loads it into an aws s3 bucket using s3cmd.

    Getting Started

    Prerequisites

    • Unix-like operating system
• s3cmd is a command line tool that makes it possible to put/get files into/from an s3 bucket. Please make sure that s3cmd is installed and configured.
  Check the s3cmd installation guide here and run s3cmd --configure after installation.
    • zip or tar should be installed
    • A configured aws s3 bucket

    Installation

    via curl

    $ curl -Lo backup https://git.io/fhMJy

    via wget

    $ wget -O backup https://git.io/fhMJy

    via httpie

    $ http -do backup https://git.io/fhMJy

    via git clone

    $ git clone https://github.com/MoonLiightz/s3cmd-backup.git
    $ cd s3cmd-backup

    Note

    Don’t forget to give the script execution permissions.

    $ chmod +x backup

    Configuration

    To configure the script, edit the downloaded file with an editor of your choice like nano or something else. At the top of the file you will find some configuration options.

    Config Option Description
    BACKUP_PATH Path to the location without ending / of the folder which should be saved.
Example: If you want to save the folder myData located in /root, then you should set BACKUP_PATH="/root"
    BACKUP_FOLDER Name of the folder which should be saved.
    Example: Based on the previous example you should set BACKUP_FOLDER="myData"
    BACKUP_NAME Name of the backup file. The date on which the backup was created is automatically appended to the name.
    Example: If you set BACKUP_NAME="myData-backup" the full name of the backup is myData-backup_year-month-day_hour-minute-second
    S3_BUCKET_NAME Name of the s3 bucket where the backups will be stored.
    Important: The name of the bucket and not the Bucket-ARN
    Example: S3_BUCKET_NAME="mybucket"
    S3_BUCKET_PATH Path in the s3 bucket without ending / where the backups will be stored.
    Example: S3_BUCKET_PATH="/backups"
COMPRESSION The compression which will be used. Available options are zip and tar.
    Example: For zip set COMPRESSION="zip" and for tar set COMPRESSION="tar"
    TMP_PATH Path to a location where files can be temporarily stored. The path must exist.
    Example: TMP_PATH="/tmp"
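Put together, the top of the edited script might look like the following sketch (the values mirror the examples above; the FULL_NAME line only illustrates the naming scheme described for BACKUP_NAME and is not part of the actual script):

```shell
# Example configuration block; values mirror the examples in the table above.
BACKUP_PATH="/root"          # location of the folder to save, no trailing /
BACKUP_FOLDER="myData"       # name of the folder to save
BACKUP_NAME="myData-backup"  # creation date gets appended automatically
S3_BUCKET_NAME="mybucket"    # bucket name, not the Bucket-ARN
S3_BUCKET_PATH="/backups"    # path inside the bucket, no trailing /
COMPRESSION="tar"            # "zip" or "tar"
TMP_PATH="/tmp"              # must already exist

# Sketch of the resulting backup name (name_year-month-day_hour-minute-second):
FULL_NAME="${BACKUP_NAME}_$(date +%Y-%m-%d_%H-%M-%S)"
echo "$FULL_NAME"
```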

    Usage

    Basic

    The script supports the following functionalities.

    Create Backup

    This command creates a backup and loads it into the specified s3 bucket.

    $ ./backup create

    List Backups

    With this command you can list the backups stored in the s3 bucket.

    $ ./backup list

    Download Backup

    To download a backup from the s3 bucket to the server you can use this command.

    $ ./backup download <filename>

    Cron

    You can also execute the script with a cronjob. The following example creates a backup every night at 2 a.m.

    0 2 * * * <path_to_script>/backup create

    License

    s3cmd-backup is released under the MIT license.

    Visit original content creator repository
    https://github.com/MoonLiightz/s3cmd-backup

  • copier

    Copier

Copier is a tool for copying files from damaged disks at low speed. A timeout can be specified to give up when a file is uncopiable.
    All copied files are logged with a status.

    Copier has been tested on macOS and Windows.

    Requirements

• Go 1.7.x, 1.8.x

    Installation

    Manual build

    Quick mode

    go get github.com/mdouchement/copier/cmd

    or

1. Install Go 1.7 or above
2. Install the Glide dependency manager
• go get -u github.com/Masterminds/glide
3. Clone this project
• git clone https://github.com/mdouchement/copier
4. Fetch dependencies
• glide install
5. Build the binary
• go build -o copier *.go
6. Install the compiled binary
• mv copier /usr/bin/copier

    Usage

copier -h

• Generate the file list first, so it can be edited before launching the copy

copier list -o /tmp/tobecopied.txt ~/Documents

Unwanted lines can be commented out by adding a # at the start of the line.
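As a sketch, an edited file list might look like this (the paths are hypothetical; the commented line will be skipped by copier copy):

```shell
# Hypothetical edited file list: the line starting with '#' will be skipped.
cat > /tmp/tobecopied.txt <<'EOF'
/Users/mdouchement/Documents/report.pdf
# /Users/mdouchement/Documents/huge-video.mov
/Users/mdouchement/Documents/photos/holiday.jpg
EOF
```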

    • Start the copy

    copier copy --speed 1MBps --timeout 1m --from-list /tmp/tobecopied.txt ~/backup
    
    Logging to /Users/mdouchement/backup/copier.log
    File /Users/mdouchement/Documents/.DS_Store: already exist
    File /Users/mdouchement/Documents/test.pdf
     4.59 MB / 7.13 MB [===================================>----------]  86.81% 1017.16 KB/s 12s

    License

    MIT

    Contributing

    1. Fork it
    2. Create your feature branch (git checkout -b my-new-feature)
3. Commit your changes (git commit -am 'Add some feature')
    4. Push to the branch (git push origin my-new-feature)
    5. Create new Pull Request

    Visit original content creator repository
    https://github.com/mdouchement/copier

  • Geospatial-Analysis

    Geospatial-Analysis

    Objective

The basic idea of analyzing the Zomato dataset is to get a fair idea about the factors affecting the establishment of different types of restaurants at different places in Bengaluru. The analysis also examines the demography of each location. Most importantly, it will help new restaurants in deciding their theme, menu, cuisine, cost, etc. for a particular location. It also aims at finding similarity between neighborhoods of Bengaluru on the basis of food.

    Problem Statement

    Observations on the following are made:

1. Top restaurant chains in Bengaluru
    2. How does the price differ between restaurants that accept online orders and those that don’t?
    3. How many restaurants offer table reservations compared to those that do not?
    4. Types of restaurants available
    5. Top-rated restaurants
    6. Restaurants located at various locations around Bangalore
    7. Approximate cost for 2 people
    8. How does the vote on restaurants accepting online orders compare to those refusing to accept them?
9. Which restaurant has the highest cost for two people? What dishes does it serve, and what is the most popular dish to eat there?
    10. Top ten most expensive and cheapest restaurants, based on an estimate for two people
    11. Restaurants under 500 (budget hotels)
    12. Budget-friendly restaurants with rating >4
    13. Overall number of restaurants that have ratings >4 and are under budget (less than 500)
    14. Hotels at various locations with affordable rates
    15. Foodie’s hotspots
    16. Heatmap of North Indian and South Indian restaurants
    17. Chains with the most popularity for casual dining
    18. Favorite dishes in various cuisines represented by a word cloud

    Dataset

    The dataset contains 17 columns as shown below:

    • url – url of the restaurant in the zomato website
    • address – address of the restaurant in Bengaluru
    • name – name of the restaurant
    • online_order – whether online ordering is available in the restaurant or not
    • book_table – table booking option available or not
    • rate – overall rating of the restaurant out of 5
• votes – total number of ratings for the restaurant as of the above-mentioned date
    • phone – phone number of the restaurant
    • location – neighborhood in which the restaurant is located
    • rest_type – restaurant type
    • dish_liked – dishes people liked in the restaurant
    • cuisines – food styles, separated by comma
    • approx_cost(for two people) – approximate cost of meal for two people
    • reviews_list – list of tuples containing reviews for the restaurant
    • menu_item – list of menus available in the restaurant
    • listed_in(type) – type of meal
    • listed_in(city) – neighborhood in which the restaurant is listed

    Data Analysis Using Python

    Work flow of process:

    1. Data Collection
    2. Data Cleaning
    3. Performing EDA
    4. Performing Geospatial Analysis
    5. Performing Sentiment Analysis

    image

    Data Collection

    • The Dataset “ZOMATO BANGALORE RESTAURANTS” is publicly available on Kaggle website with 51,717 records and 17 attributes as shown under the dataset section.

    Data Cleaning

    • This is an essential step to perform before creating a visualization.
    • Clean, consistent data will be much easier to visualize.
    • As a result, missing values are filled, data are filtered accordingly, and inappropriate data are removed.

    Exploratory Data Analysis

• There are different types of charts (bar, pie, line, scatter plot, column chart, etc.) which can visually present the data in a more understandable way.
    • Below bar chart shows the most famous restaurant chains in Bangalore with number of outlets.

    image

    • The following pie chart shows the percentage of online orders accepted by restaurants.

    image

    • The below figure represents the bar chart for different types of restaurants.

    image

    • Bar graph of different varieties of cuisines in Bangalore.

    image

• The scatter plot below has the X axis denoting the ratings of the restaurants and the Y axis denoting the approximate cost for 2 people.

    image

    • Box plot depicting the price difference between restaurants that accept online orders and those that do not

    image

    Geospatial Analysis

    • Geospatial Analysis is useful for locating the geographical area in a particular region.

    Heatmap of Restaurants in Bengaluru city

    • For locating the restaurants in geographical map, we need latitudes, longitudes and count of restaurants.
    • Extract the “Latitude” and “Longitude” w.r.t. different Locations using Python’s Geopy library.
    • Generate a “BaseMap” of Bangalore using Python’s Folium library.

    geo analysis

    • Plot a HeatMap based on variety of use cases with the help of Python’s Folium “HeatMap” Plugins.
    • The heatmap below depicts the clutter of restaurants in Bengaluru.

    heatmap of blore

    Heatmap of North Indian restaurants

    hm of ni

    Sentiment Analysis

    • Here are the Wordclouds developed using the built-in function in python called “WordCloud” for 9 different types of restaurants where customers left feedback.
    • To generate the below pictured wordclouds using Python, feedbacks are preprocessed, null values are dropped and all characters and spaces are removed except alphabets.

    image

    image

    image

    Tools Used

    Jupyter Notebook Python Pandas NumPy Matplotlib Plotly

    • Jupyter Notebook is used as IDE.
• Among the Python libraries, Pandas and NumPy are used for data handling, preprocessing, and mathematical operations.
    • Plotly, Seaborn, and Matplotlib are used for visualizing plots.

    For more details, please go through the Jupyter Notebook attached above.

    Conclusion

    • Cafe Coffee Day dominates the restaurant chain landscape followed by Onesta and then Empire.
    • Online orders are accepted by 64.4% of restaurants, whereas 35.6% of restaurants do not accept them.
    • The city of Bangalore is known as a high-tech hub of India, and people who live a busy and modern life are inclined to choose Quick Bites.
    • The most common cuisines are North Indian, Chinese, and South Indian. Bangalore is therefore influenced more by the cultures of the north than those of the south.
    • Having reviewed the above scatterplot, we can conclude that most of the highest-rated restaurants accept online orders and are budget-friendly as well.
    • In the box plot, it can be seen that there is a discrepancy between the median number of votes for both categories. The Zomato application gives customers the option to rate restaurants after they’ve ordered through it. This will lead to more votes for the restaurants accepting online orders.
    • The majority of the restaurants are priced under 1000, which means they are affordable and few are luxurious.
• The most eateries are found in BTM, HSR, and Koramangala 5th Block. BTM dominates the section with more than 4000 restaurants.
    • It is evident that eateries are primarily located in the central Bangalore region. As we get farther from the center of the city, the number of restaurants decreases. Therefore, prospective restaurateurs can consult this to identify suitable places for their business.

    Check out the notebook above to learn more

    Visit original content creator repository https://github.com/NiveditaSureshK/Geospatial-Analysis
  • eventarbiter

    Build Status

    eventarbiter


Kubernetes emits events when something important happens internally.

For example, when the CPU or Memory pool a Kubernetes cluster provides cannot satisfy the request an application made, a FailedScheduling event will be emitted, and the message contained in the event will explain the reason for the FailedScheduling, with event messages like pod (busybox-controller-jdaww) failed to fit in any node\nfit failure on node (192.168.0.2): Insufficient cpu\n or pod (busybox-controller-jdaww) failed to fit in any node\nfit failure on node (192.168.0.2): Insufficient memory\n.

Also, if an application allocates a lot of memory, exceeding the limit watermark, the kernel OOM killer will wake up and kill processes. Under this circumstance, Kubernetes will emit a SystemOOM event with an event message like System OOM encountered.

Note that we may use various monitoring stacks for Kubernetes, and we can send an alarm if, say, the average memory usage exceeds 80 percent of the limit over the past two minutes. However, if the memory allocation happens in a short burst, the monitor may not send an alarm: memory usage spikes briefly, after which the process is killed and restarted with normal memory usage. Resource fragmentation also exists in a Kubernetes cluster. We may encounter a situation where the total remaining memory and CPU pool can satisfy the request of an application, but the scheduler cannot schedule the application instances. This happens because the remaining CPU and memory resources are split across all the minion nodes, and no single minion can provide enough CPU or memory for the application.

Things that cannot be handled by a monitor can be handled by events. eventarbiter can watch for events and filter out the events indicating a bad status in a Kubernetes cluster.

eventarbiter supports a callback when one of the watched events happens. eventarbiter does NOT send event alarms for you; you should do this yourself using the callback.

    Comparison


There are already some projects that do something with Kubernetes events.

• Heapster has a component eventer. eventer can watch for events in a Kubernetes cluster and supports ElasticSearch, InfluxDB or log sinks to store them. It is really useful for collecting and storing Kubernetes events. We can monitor what happens in the cluster without logging into each minion. eventarbiter also borrows the logic of watching Kubernetes events from eventer.
• kubewatch can only watch for Kubernetes events about the creation, update and deletion of Kubernetes objects, such as Pod and ReplicationController. kubewatch can also send an alarm through Slack. However, kubewatch is limited in the events that can be watched and in its alarm channels. With eventarbiter's callback sink, you can POST the event alarm to a transfer station. After that, you can do anything with the event alarm, such as sending it by email or through PagerDuty. It is under your control. 🙂

    Event Alarm Reason


Event Description
node_notready occurs when a minion (kubelet) node changes to NotReady
node_notschedulable occurs when a minion (kubelet) node changes status to SchedulableDisabled
node_systemoom occurs when an application is OOM killed on a minion (kubelet) node
node_rebooted occurs when a minion (kubelet) node is restarted
pod_backoff occurs when a container in a pod cannot be started normally. In our situation, this may be caused by the image not being pullable or the specified image not existing
pod_failed occurs when a container in the pod cannot be started normally. In our situation, this may be caused by the image not being pullable or the specified image not existing
pod_failedsync occurs when a container in the pod cannot be started normally. In our situation, this may be caused by the image not being pullable or the specified image not existing
pod_failedscheduling occurs when an application cannot be scheduled in the cluster
pod_unhealthy occurs when the pod health check fails
npd_oomkilling occurs when OOM happens
npd_taskhung occurs when a task hangs for /proc/sys/kernel/hung_task_timeout_secs (mainly used for docker ps hangs)

    Note


    • For more info about npd_oomkilling and npd_taskhung, you should deploy node-problem-detector in your Kubernetes cluster.

    Usage


Just like eventer in the Heapster project, eventarbiter supports the source and sink command line arguments.

    • Argument
      • source
• sink argument; the usage is like eventer's sink. eventarbiter supports stdout and callback.
        • stdout can log the event alarm to stdout with json format.
        • callback is a HTTP API with POST method enabled. The event alarm will be POSTed to the callback URL.
          • --sink=callback:CALLBACK_URL
          • CALLBACK_URL should return HTTP 200 or 201 for success. All other HTTP return status code will be considered failure.
      • environment
• a comma-separated list of key-value pairs, set as the Environment map field in the event alert object. This can be used as a context to pass whatever you want.
      • event_filter
        • Event alarm reasons specified in event_filter will be filtered out from eventarbiter.

The normal commands to start an instance of eventarbiter are:

    • dev
      • eventarbiter -source='kubernetes:http://127.0.0.1:8080?inClusterConfig=false' -logtostderr=true -event_filter=pod_unhealthy -max_procs=3 -sink=stdout
    • production
      • eventarbiter -source='kubernetes:http://127.0.0.1:8080?inClusterConfig=false' -logtostderr=true -event_filter=pod_unhealthy -max_procs=3 -sink=callback:http://127.0.0.1:3086
• There is also a faked HTTP service in script/dev listening on port 3086 with a / endpoint.

    Build


    • make build
• Note: eventarbiter requires Go 1.7
    Visit original content creator repository https://github.com/andyxning/eventarbiter
  • space-truckers

    Space-Truckers: The Video Game

    A game of getting stuff from Point A to Point B… IN SPAAAACCE!

    Space-Truckers is an OSS project intended to demonstrate key concepts of integrating the BabylonJS WebGL/WebGPU framework into a web-based interactive application.

    space trucker concept art

    About the game

    Gameplay in Space-Truckers is divided into three distinct phases: planning, driving, and scoring.

In the planning phase, your simulated cargo container (a.k.a. your trailer) starts in orbit around one of the system’s planets. The overall goal is to plan a course that will take the cargo pod to its destination – or at least close enough to intersect the destination planet’s retrieval systems, but you won’t have the benefit of being able to make course changes once you’ve launched on your journey – say a prayer to Sir Isaac Newton, because it is the gravitational forces of the star and its attendant planets that will bend and alter the ballistic path of your cargo post-launch!

    Before launch though, you’ll be able to specify the precise direction, force, and timing of your cargo so you can line up the perfect route. Better routes are ones that have a higher potential score. The potential score is determined by a number of factors, including the length of the route (longer routes have more opportunity to gain score, but risk losing even more points in time penalties), the amount of time in transit, average speed, and more.

    During planning, the simulation can be reset as many times as needed – that’s why it’s a simulation, after all! When you’ve launched on a successful route, you’ll have the option to either accept the route or reset and try again. Accepting the route takes you to the next game phase, where you’ll ride along with your cargo in your Space-Tractor, helping to nudge and guide it through a series of challenges encountered along the route.

    Once it’s all said and done, your potential score will be displayed along with the actual score earned from the driving phase. Maybe you’ll make the leaderboards someday!

    How to Play

    Menus

    Key(s) Action
    ↑,W Move selection up
    ↓,S Move selection down
    Enter/Return Confirm/Invoke selection
    Backspace/Delete Cancel/Go back
    Spacebar Skip cut scene (where applicable)

    Route Planning

    Key(s) Action
    WASD Aim
    Move camera
    Spacebar, Enter Launch
    Shift Increase launch velocity
    Ctrl Decrease launch velocity
    Spacebar, Enter Confirm route
    Backspace, Delete Retry
    P Pause

    Driving

    Key(s) Action
W Apply forward acceleration (speed up)
    S Decelerate along forward axis (slow down)
    A Left Translate
    D Right Translate
    Rotate Left
    Rotate Right
    Translate Up
    Translate Down
    P Pause
    Del Reset

    Building the Application from Source

Although Space-Truckers is built to run in any browser capable of using WebGL and related JavaScript APIs, there are a few more requirements involved if you want to build the application and game from source code. You’ll need:

    • NodeJS v14+
    • NPM to match

    Once you’ve cloned the source to your local machine, you should run an npm install to fetch and install needed dependencies. The /dist folder will contain the output of running npm run build, but for local development the npm run start command will run the webpack dev-server, which allows for module hot swapping and reloading, greatly speeding up the time between making a change and seeing it reflected in a browser!

    Concepts

Design docs and sketches are located in the /design/ folder.

    Getting Help and Providing Feedback

    There are a number of different ways to get assistance with an issue you may encounter. Have a question about the game? Head over to the discussion boards and post your question there among the various topics available, or create your own.

    If you encounter a bug or issue with the game or application you can create an Issue to help us track it, or add a comment to an existing issue that might help us understand the problem better.

    Thanks for participating!

    Conceptual sketches

    mass-driver concept


    cargo pod concept


    cabin chase concept

    Visit original content creator repository https://github.com/jelster/space-truckers
  • uniex

    OVERVIEW

    Go Reference Go Report Card Go Build License SemVer
    built with nix

    paepcke.de/uniex

    UNIEX

    • Exports UNIFI Controller Inventory Database (mongoDB) to [csv|json]
    • Adds missing attributes (latest used hostname, latest used ip, latest seen, …) via parsing all device and stat records
• Converts timestamps (e.g. Unix nano time) into RFC3339 (parseable by Excel, human-readable) format
    • Fast, even for large corp inventory (in-memory, parallel processing)

    SCREENSHOT CLI

    UNIEX SAMPLE SCREENSHOT

    ⚡️QUICK START

    go run paepcke.de/uniex/cmd/uniex@main
    

    ⚡️PRETTY PRINT CSV OUTPUT VIA BAT

    go run paepcke.de/uniex/cmd/uniex@main | bat -l csv
    

    ⚡️PRETTY PRINT JSON OUTPUT VIA JQ

    UNIEX_FORMAT=json go run paepcke.de/uniex/cmd/uniex@main | jq
    

    ⚡️HOW TO INSTALL

    go install paepcke.de/uniex/cmd/uniex@main
    

    ⚡️PRE-BUILD BINARIES (DOWNLOAD)

    https://github.com/paepckehh/uniex/releases

    SUPPORTED OPTIONS

    # Optional (via env variables)
    - UNIEX_MONGODB   - mongodb uri, default: mongodb://127.0.0.1:27117
    - UNIEX_FORMAT    - export format, default: csv [csv|json]
    

    DOCS

    pkg.go.dev/paepcke.de/uniex

    🛡 License

    License

This project is licensed under the terms of the BSD 3-Clause License. See LICENSE for more details.

    📃 Citation

    @misc{uniex,
      author = {Michael Paepcke},
      title = {Export UNIFI MongoDB Inventory Database},
      year = {2024},
      publisher = {GitHub},
      journal = {GitHub repository},
      howpublished = {\url{https://paepcke.de/uniex}}
    }

    CONTRIBUTION

    Yes, Please! PRs Welcome!

    Visit original content creator repository https://github.com/paepckehh/uniex