Blog

  • emacs.d

    Visit original content creator repository
    https://github.com/jhpx/emacs.d

  • handwriting-synthesis

    Handwriting Synthesis

    Implementation of the handwriting synthesis experiments in the paper Generating Sequences with Recurrent Neural Networks by Alex Graves. The implementation closely follows the original paper, with a few slight deviations, and the generated samples are of similar quality to those presented in the paper.

    Web demo is available here.

    Usage

    from demo import Hand  # the Hand class currently lives in demo.py

    lines = [
        "Now this is a story all about how",
        "My life got flipped turned upside down",
        "And I'd like to take a minute, just sit right there",
        "I'll tell you how I became the prince of a town called Bel-Air",
    ]
    biases = [0.75 for _ in lines]
    styles = [9 for _ in lines]
    stroke_colors = ['red', 'green', 'black', 'blue']
    stroke_widths = [1, 2, 1, 2]
    
    hand = Hand()
    hand.write(
        filename='img/usage_demo.svg',
        lines=lines,
        biases=biases,
        styles=styles,
        stroke_colors=stroke_colors,
        stroke_widths=stroke_widths
    )

    Currently, the Hand class must be imported from demo.py. If someone would like to package this project to make it more usable, please contribute.

    A pretrained model is included, but if you’d like to train your own, read these instructions.

    Demonstrations

    Below are a few hundred samples from the model, including some samples demonstrating the effect of priming and biasing the model. Loosely speaking, biasing controls the neatness of the samples and priming controls the style of the samples. The code for these demonstrations can be found in demo.py.
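
    As a sketch of how those two knobs become inputs, Hand.write takes one bias and one style index per line of text (see Usage above). Building the per-line lists looks like this; the style count of 13 used below is an assumption, not taken from the repository:

```python
# Sketch: per-line control lists for Hand.write (see Usage above).
# Each line of text gets its own bias (neatness) and style index
# (priming writer). The style count of 13 is an assumption.
lines = ["first verse", "second verse", "third verse"]

biases = [0.75 for _ in lines]                # fixed neatness for every line
styles = [i % 13 for i in range(len(lines))]  # a different priming style per line

print(biases)  # [0.75, 0.75, 0.75]
print(styles)  # [0, 1, 2]
```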

    Demo #1:

    The following samples were generated with a fixed style and fixed bias.

    Smash Mouth – All Star (lyrics)

    Demo #2

    The following samples were generated with varying style and fixed bias. Each verse is generated in a different style.

    Vanessa Carlton – A Thousand Miles (lyrics)

    Demo #3

    The following samples were generated with a fixed style and varying bias. Each verse has a lower bias than the previous, with the last verse being unbiased.

    Leonard Cohen – Hallelujah (lyrics)
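
    The varying-bias schedule in Demo #3 can be sketched as a simple per-verse list that steps the bias down to zero for the final, unbiased verse. The linear step below is an assumption; the actual values used in demo.py may differ:

```python
# Sketch: one bias per verse, decreasing linearly so the last verse
# is unbiased (bias = 0). The linear schedule is an assumption.
num_verses = 4
biases = [round(1.0 - v / (num_verses - 1), 2) for v in range(num_verses)]
print(biases)  # [1.0, 0.67, 0.33, 0.0]
```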

    Contribute

    This project was intended to serve as a reference implementation for a research paper, but since the results are of decent quality, it may be worthwhile to make the project more broadly usable. I plan to continue focusing on the machine learning side of things. That said, I’d welcome contributors who can:

    • Package this, and otherwise make it look more like a usable software project and less like research code.
    • Add support for more sophisticated drawing, animations, or anything else in this direction. Currently, the project only creates some simple svg files.
    Visit original content creator repository https://github.com/sjvasquez/handwriting-synthesis
  • Kuma-Service-Mesh-Observability

    MICROSERVICES OBSERVABILITY WITH SERVICE MESH and NEW RELIC ONE (NR1)

    Solution

    In this tutorial, we will cover the fundamentals of Kuma Service Mesh, NR1, K8s (Minikube), and OSS metrics, tracing, and logging.

    Kuma Mesh Architecture

    Why should you care?

    ====================================

    1. Fragmentation of Traces, Metrics and Logs
    2. Reduce the MTTXs for degradations/exceptions with your microservices
    3. Minimize hops and reduce Dev/Ops fatigue (one observability console instead of 7 GUIs)
    4. Have a monitor for your monitoring Infrastructure (Airbus A380 metaphor)

    Let’s see a demo: NR1 Service Mesh Demo

    INTRODUCTIONS

    ====================================

    Upon cloning, you will find the necessary files to get Kuma Service Mesh up and running in a K8s cluster (Minikube used here)

    When running on Kubernetes, Kuma stores all of its state and configuration in the underlying Kubernetes API server, therefore requiring no separate datastore.

    Table of Contents

    • New Relic instrumented Kuma Marketplace full-stack app (Vue.js Browser Service, NodeJS App service, PostgreSQL and Redis)

    • Docker Hub public images used: monitorjain/kuma-demo-frontend:v3 and monitorjain/kuma-demo-backend:latest

    • Kuma Service Mesh (Deployment How-tos)

    • New Relic One (Deployment How-tos)

    • Deploy New Relic Helm Chart for K8s (deploys prometheus, logs, traces, metadata injector, daemonset, native events collector etc)

    Environment Setup pre-reqs


    1.0 Setup Minikube K8s Cluster

    ====================================


    1.2 Start your Minikube K8s Cluster


    minikube start --vm-driver=hyperkit -p kuma-demo --cpus=3 --memory=8192 
    
    • Caution: verify the compute capacity of your laptop before allocating 3 CPUs and 8 GB of memory

    • Wait until you see this output: Done! kubectl is now configured to use “kuma-demo”

    • Note: You may also leverage EKS, GKE or AKS.

    1.3 Deploy the Marketplace full stack app (Vue.js Browser, NodeJS App, Redis and PostgreSQL)


    Before continuing with the deployment, let’s build a Docker image with your New Relic Browser agent baked in. The Node.js app is pre-baked with the New Relic application agent; this same agent will also monitor Redis and PostgreSQL DB transactions.

    Let’s get the Vue.js app dockerized with your New Relic Browser agent; click the link to get started.

    Continue

    ===============================================

    CAUTION: Do not continue this sub-stage before creating your NR instrumented Vue.js browser app docker image.

    Welcome back, now, let’s add your unique New Relic License Key to the K8s deployment YAML below.

    App Architecture

    • cd ../full_stack_app_with_NR_FSO/

    • vim or nano kuma-aio.yaml

    • Run the following command to deploy the marketplace application, which points to the all-in-one YAML file provided in this directory:

    • To get the license key, on a new tab – visit the NR1 settings page

        Once you grab the license key from the NR1 Settings page, add it to the YAML file: 
      
          name: NEW_RELIC_LICENSE_KEY
          value: "<INSERT_LICENSE_KEY>"
      
        Repeat the license key input step four times (the key appears in four places in the file)
      
        Next, replace the frontend Docker image name with the image you created in the sub-tutorial (the NR Browser-instrumented frontend Vue.js app image) section
      
        This will look like:
          cd full_stack_app_with_NR_FSO/
          vi kuma-aio.yaml
          replace image: monitorjain/kuma-demo-frontend:v3 with image: <hub_user>/<your image name>:<version>
      
        Next, let's deploy the full stack app on the Minikube or cloud K8s cluster with the following command:
          $ kubectl apply -f kuma-aio.yaml
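
    Since the placeholder appears four times, one way to avoid editing it by hand is a single sed pass. This is a sketch on a scratch file; the <INSERT_LICENSE_KEY> placeholder string is taken from the snippet above, and you would point sed at your real kuma-aio.yaml instead:

```shell
# Demo on a scratch file: replace every license-key placeholder in one pass.
# The placeholder string comes from the YAML snippet above; adjust the
# filename and placeholder to match your actual kuma-aio.yaml.
printf 'value: "<INSERT_LICENSE_KEY>"\n%.0s' 1 2 3 4 > /tmp/kuma-aio-demo.yaml
LICENSE_KEY="abc123"   # substitute your real New Relic license key
sed -i.bak "s/<INSERT_LICENSE_KEY>/${LICENSE_KEY}/g" /tmp/kuma-aio-demo.yaml
grep -c "${LICENSE_KEY}" /tmp/kuma-aio-demo.yaml   # prints 4
```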
      
    • And then verify that the pods are up and running by executing the following command:

        $ kubectl get pods -n kuma-demo
      
        EXPECTED OUTPUT:
        NAME                                   READY   STATUS    RESTARTS   AGE
        kuma-demo-app-69c9fd4bd-4lkl7          1/1     Running   0          40s
        kuma-demo-backend-v0-d7cb6b576-nrl67   1/1     Running   0          40s
        postgres-master-65df766577-qqqwr       1/1     Running   0          40s
        redis-master-78ff699f7-rsdfk           1/1     Running   0          40s
      
    • If you are on a cloud K8s cluster, an external IP and port will be auto-generated (the frontend Service is of type LoadBalancer); if you’re on Minikube, simply port-forward your frontend service:

        kubectl port-forward service/frontend -n kuma-demo 8080
        Forwarding from 127.0.0.1:8080 -> 8080
        Forwarding from [::1]:8080 -> 8080  
      
    • This marketplace application is currently running WITHOUT Kuma. So all traffic is going directly between the services, and not routed through any dataplanes. In the next step, we will download Kuma and quickly deploy the mesh alongside an existing application.

    2.0 Setup KUMA Service Mesh

    ====================================


    • Run the following curl script to automatically detect the OS and download Kuma:

        curl -L https://kuma.io/installer.sh | sh -
      
    • On Kubernetes, run the following commands:

        cd kuma-0.7.1/bin && ls
      
        $ kumactl install control-plane | kubectl apply -f -
        
        $ kubectl get pods -n kuma-system (validate that kuma control plane is deployed and running)
        
        $ kubectl delete pods --all -n kuma-demo (delete existing pods for sidecar injector to kick-in)
        
        $ kubectl get pods -n kuma-demo -w (this time you'll observe multi-container pods - voilà, Envoy is ready!)
        
        $ kubectl port-forward service/frontend -n kuma-demo 8080 (port forward again - only for Minikube)
      
    • When running on K8s, Kuma has no external dependencies: it is written in Go and leverages the underlying API server to store its configuration, making it universal and straightforward to deploy

    • Now we are ready to deploy the dataplanes (Envoy sidecar proxies; fret not, the required annotations are already included in my kuma-aio.yaml)

    • NEXT STEPS: You can now explore the Kuma GUI on port 5681!

        kubectl port-forward service/kuma-control-plane -n kuma-system 5681 (to access KUMA GUI: http://localhost:5681/gui)
      
    • Next, configure kumactl to point to the address where the HTTP API server sits:

        $ ./kumactl config control-planes add --name=minikube --address=http://localhost:5681
      
    • Inspect the dataplanes:

        $ ./kumactl inspect dataplanes (via CLI)
      
        Expected output of the previous config command:
        added Control Plane "minikube"
        switched active Control Plane to "minikube"
      
        http://localhost:5681/gui (visually)
      
        https://github.com/kumahq/kuma-gui - the GUI is open source
      

    INTEGRATIONS

    ====================================

    KONG GATEWAY BONUS (OPTIONAL)

    • Command to install the Kong API G/W: $ kubectl apply -f https://bit.ly/demokumakong

        $ kubectl get pods -n kuma-demo (validation step)
      
        export PROXY_IP=$(minikube service -p kuma-demo -n kuma-demo kong-proxy --url | head -1) (To point mkt place requests at Kong)
      
        echo $PROXY_IP (e.g. http://192.168.64.49:31553)
      
        Add an Ingress Rule
        $ cat <<EOF | kubectl apply -f - 
        apiVersion: extensions/v1beta1
        kind: Ingress
        metadata:
          name: marketplace
          namespace: kuma-demo
        spec:
          rules:
          - http:
              paths:
              - path: /
                backend:
                  serviceName: frontend
                  servicePort: 8080
        EOF
      
        Now, hit the PROXY_IP URL (i.e. the Ingress Kong API G/W address), the marketplace app should be available there. 
      

    3.0 NEW RELIC CENTRALIZED O11Y SETUP

    ====================================

    • Click on the K8s instrumentation option via the User menu (screenshot: Setup K8s 0)

    • Fill in the required details and config attributes (screenshot: Setup K8s 1)

    • Next, copy the Helm chart deploy commands to your clipboard (screenshot: Setup K8s 2)

    • If you don’t have Helm installed on your workstation, run brew install helm

    • Last, validate that the data is received (screenshot: Setup K8s 3)

    • Note: Your traffic permissions may block data; skim through your Service Mesh config and settings via the GUI

    4.0 NEW RELIC ACTIVATION STEPS

    ====================================

    5.0 NEW RELIC ADD-ONS

    ====================================

    • Ready-to-use alert and dashboard templates for Terraform

    • Ready-to-use alert & dashboard JSONs

    • Walk-through video

    • Please raise an issue if you have recommendations for improvements

    • Kuma Mesh – Logging, Tracing and Metrics collection setup

    • Advanced Kuma deployment – Global and multi-zone mode

    • Advanced Kuma-specific monitoring templates and alerting strategy

    • For leveraging OSS natively on a Mesh – https://github.com/kumahq/kuma-demo/tree/master/kubernetes#prometheus-and-grafana

    OTHER CRITICAL USE-CASES

    ====================================

    • Get Meshes

        $ ./kumactl get meshes
      
    • Activate MTLS

        $ cat <<EOF | kubectl apply -f - 
        apiVersion: kuma.io/v1alpha1
        kind: Mesh
        metadata:
          name: default
        spec:
          mtls:
            enabledBackend: ca-1
            backends:
            - name: ca-1
              type: builtin
          metrics:
            enabledBackend: prometheus-1
            backends:
            - name: prometheus-1
              type: prometheus
        EOF
      
    • Validation step

        $ ./kumactl get meshes
        NAME      mTLS           METRICS                   LOGGING   TRACING   AGE
        default   builtin/ca-1   prometheus/prometheus-1   off       off       24m
      
    • Traffic permission

        $ cat <<EOF | kubectl apply -f - 
        apiVersion: kuma.io/v1alpha1
        kind: TrafficPermission
        mesh: default
        metadata:
          namespace: kuma-demo
          name: everything
        spec:
          sources:
          - match:
              kuma.io/service: '*'
          destinations:
          - match:
              kuma.io/service: '*'
        EOF
      
    • Delete the free-flowing traffic permission (stop fake spam reviews from being submitted into Redis)

        $ kubectl delete trafficpermission -n kuma-demo --all
      
    • Scale Replicas – backend-v1 and backend-v2 were deployed with 0 replicas, so let’s scale them up to one replica each to see how traffic routing works:

        $ kubectl scale deployment kuma-demo-backend-v1 -n kuma-demo --replicas=1
      
      
        $ kubectl scale deployment kuma-demo-backend-v2 -n kuma-demo --replicas=1
      
    Visit original content creator repository https://github.com/njain1985/Kuma-Service-Mesh-Observability
  • ZXStructs-Swift

    ZXStructs-Swift

    New Version

    Installation

    ZXStructs is available through CocoaPods. To install it, simply add the following line to your Podfile:

    pod "ZXStructs"

    Author

    JuanFelix, hulj1204@yahoo.com

    Common UI configuration and common utility classes

    Being improved in my spare time…

    • Tags: 0.2

    UIConfig

    1. Theme color configuration

    • ZXTintColorConfig.plist

    colorConfig

    • Usage:
    self.view.backgroundColor = UIColor.zx_backgroundColor
    UIColor.zx_tintColor
    UIColor.zx_borderColor
    
    • ...

    2. Font and font-size configuration

    • ZXFontConfig.plist

    fontConfig

    • Usage:
    Colors:
    UIColor.zx_textColorTitle
    UIColor.zx_textColorBody
    UIColor.zx_textColorMark
    
    Fonts:
    self.lbTitle.font   = UIFont.zx_titleFont (zx_bodyFont,zx_markFont)
    self.lbTitle.font	= UIFont.zx_titleFont(20)
    
    iconfont:
    self.lbIconFont.font    = UIFont.zx_iconFont(30)
    self.lbIconFont.text    = "IconFont,\u{e616}"
    
    Font sizes:
    UIFont.zx_titleFontSize
    
    • ...

    • Color and font screenshots

    FIG1
    fig1

    3. NavigationBar configuration

    • ZXNavBarConfig.plist

    fontConfig

    • Enable the configuration:
    AppDelegate:
    ZXStructs.loadnavBarConfig()
    
    Extensions (set the navbar's left and right bar buttons):
    • From iconfont
    self.zx_navbarAddBarButtonItems(iconFontTexts: ["\u{e612}","\u{e613}"], fontSize: 30, color: UIColor.orange, at: .left)
    
    • From plain text
    self.zx_navbarAddBarButtonItems(textNames: ["Call"], font: nil, color: UIColor.white, at: .right)
    
    • From images
    self.zx_navbarAddBarButtonItems(imageNames: ["r1","r2"], useOriginalColor: true, at: .right)
    
    • From a custom view
    self.zx_navbarAddBarButtonItem(customView: view, at: .right)
    
    • Events
    override func zx_leftBarButtonAction(index: Int) {
    	print("Left Action At Index:\(index)")
    }
    
    override func zx_rightBarButtonAction(index: Int) {
    	print("Right Action At Index:\(index)")
    }
        
    
    • Change the color
    self.zx_setnavbarBackgroundColor(UIColor(red: r, green: g, blue: b, alpha: 1.0))
    
    • ...

    • NavBar screenshots

    NAV1 NAV2
    nav1 nav2
    NAV3 NAV4
    nav3 nav4

    4. Tabbar configuration

    • ZXNavBarConfig.plist

    fontConfig

    • Enable the configuration:
    AppDelegate:
    ZXStructs.loadtabBarConfig()
    
    Extensions (add child view controllers)
    1. Read items from the plist file:
    tabBar?.zx_addChildViewController(demoVC, fromPlistItemIndex: 0)
    
    2. Configure in code
    let itemInfo                = ZXTabbarItem()
    // item title
    itemInfo.title              = "不一样"
    // unselected image
    itemInfo.normalImage        = "tabbarIcon4-normal"
    // selected image
    itemInfo.selectedImage      = "tabbarIcon4-selected" 
    // whether tapping the tabbar item presents the controller modally
    itemInfo.showAsPresent      = false
    // whether to embed in a navigation controller
    itemInfo.embedInNavigation  = true
    tabBar?.zx_addChildViewController(ZXHHViewController(), fromItem: itemInfo)
    
    3. System method
    tabBar?.addChildViewController(vc)
    
    • Change the color
    self.zx_settabbarBackgroundColor(UIColor.clear)
    

    If showAsPresent = true, you must implement the delegate method

    tabBar.delegate = self
    extension AppDelegate: UITabBarControllerDelegate {
        func tabBarController(_ tabBarController: UITabBarController, shouldSelect viewController: UIViewController) -> Bool {
            return UITabBarController.zx_tabBarController(tabBarController,shouldSelectViewController:viewController)
        }
    }
    
    • TabBar screenshots
    With color
    tab1
    Without color
    tab1
    • ...

    CommonUtils

    • ZXImagePickerUtils
    // pick from the photo library
    imagePicker.choosePhoto(presentFrom: self) {[unowned self] (image, status) in
        if status == .success {
            self.imgView.image = image
        }else{
            if status == .denied {
                ZXImagePickerUtils.showTips(at: self, type: .choosePhoto)
            }else{
                ZXAlertUtils.showAlert(withTitle: "提示", message: status.description())
            }
        }
    }
    // take a photo
    imagePicker.takePhoto(presentFrom: self) { [unowned self] (image, status) in
        if status == .success {
            self.imgView.image = image
        }else{
            if status == .denied {
                ZXImagePickerUtils.showTips(at: self, type: .takePhoto)
            }else {
                ZXAlertUtils.showAlert(withTitle: "提示", message: status.description())
            }
        }
    }
            
    
    • ZXAlertUtils
    ZXAlertUtils.showAlert(withTitle: "提示", message: errorMsg!)
    
    • ZXLocationUtils
    ZXLocationUtils.shareInstance.checkCurrentLocation(completion: { (status, location) in
        if status == .success,let location = location {
            print("latitude:\(location.coordinate.latitude),longitude:\(location.coordinate.longitude)")
        }else{
            print(status.description())
        }
    })
    
    • ZXDateUtils
    ZXDateUtils.currentDateTime(true, timeWithSecond: true)//2017年04月19日 12:01:57
    ZXDateUtils.currentDate(true)//2017年04月19日
    ZXDateUtils.currentTime(true)//12:01:57
    ZXDateUtils.datetimeFromMilliSecond(1492569882000, chineseFormat: true, timeWithSecond: false)//2017年04月19日 10:44
    ZXDateUtils.dateFromMilliSecond(1492569882000, chineseFormat: false)//2017-04-19
    ZXDateUtils.timeFromMilliSecond(1492569882000, timeWithSecond: true)//10:44:42
    ZXDateUtils.milliSecondFromDate("2017/4/19 10:54:48",dateFormat: "YYYY/MM/dd HH:mm:ss")//1492570488000
    ZXDateUtils.milliSecondFromDate("2017年4月19日 10:54:48",dateFormat: "YYYY年MM月dd日 HH:mm:ss")//1492570488000
    ZXDateUtils.currentMillisecond()//1492574517213
    ZXDateUtils.intToTime(123456,componentString: nil)//10°17′36″
    
    • Work in progress...

    Network(Tiny)

    • Async Request(JSON/String GET/POST)
    ZXNetwork.asyncRequest(withUrl: "https://itunes.apple.com/search", params: ["term":"qq","limit":"1","entity":"software"], method: .get, completion: { (obj, stringValue) in
        print("\(obj ?? "")")
    }, timeOut: { (errorMsg) in
        print("TimeOut:\(errorMsg)")
    }) { (code, errorMsg) in
        print("HttpError:\(code) \(errorMsg)")
    }
    
    • Upload Image
    ZXNetwork.uploadImage(to: "https://192.168.0.81:8000/upload", images: [UIImage(named:"r1")!,UIImage(named:"r2")!], params: nil, compressRatio: 1, completion: { (obj, string) in
        print("\(obj ?? "")")
    }, timeOut: { (errorMsg) in
        print("TimeOut:\(errorMsg)")
    }) { (code, errorMsg) in
        print("HttpError:\(code) \(errorMsg)")
    }
    

    RemoteNotification

    • Work in progress...

    Router

    • Work in progress...

    Web

    • Work in progress...

    CommonUI

    • Work in progress...

    • See the Demo for basic usage of the base version
    • Convenient for myself, and perhaps for a few others
    Visit original content creator repository https://github.com/kk-vv/ZXStructs-Swift
  • video

    Lifecycle: experimental Codecov test coverage R-CMD-check

    {video} – Interactive Video Player

    {video} is a package that utilises the video.js library to play video on the modern web.

    Installation

    This package is not yet available on CRAN. To install the latest version:

    install.packages("devtools")
    devtools::install_github("ashbaldry/video")

    Usage

    The HTML way to include a video file in any shiny application/web page is to use the <video> tag. This cannot (easily) be manipulated from the server.

    tags$video(src = "https://vjs.zencdn.net/v/oceans.mp4", type = "video/mp4", controls = NA)

    video.js is a flexible video player that is more robust than the basic HTML5 video player, and can easily be manipulated from the server side of shiny applications.

    library(shiny)
    library(video)
    
    ui <- fluidPage(
      title = "video Example",
      h1("Video Example"),
      video(
        "https://vjs.zencdn.net/v/oceans.mp4",
        elementId = "video"
      ),
      tags$p(
        "Currently playing:",
        textOutput("video_playing", container = tags$strong, inline = TRUE)
      )
    )
    
    server <- function(input, output, session) {
      output$video_playing <- renderText({
        if (isTRUE(input$video_playing)) "Yes" else "No"
      })
    
      observe({
        req(input$video_seek)
        if (round(input$video_seek) == 10) {
          pauseVideo("video")
        } else if (round(input$video_seek) == 20) {
          stopVideo("video")
        }
      })
    }
    
    shinyApp(ui, server)

    video.js.shiny.Example.mp4

    Whilst the buttons below the video aren’t required for playing/pausing the video, they are linked to observeEvents that send messages from the server to the video to update.

    Extending video.js

    For those who want more from video.js than is currently available within {video}, the API is very flexible (https://docs.videojs.com/), and any video can be retrieved in JavaScript using const player = videojs("id") and manipulated from there.

    Examples

    All examples are available in the Examples directory and can be run locally by installing the {video} package.

    Visit original content creator repository https://github.com/ashbaldry/video
  • HeliPilot

    ArduPilot Project

    Gitter

    Build Travis Build SemaphoreCI Build Status

    Coverity Scan Build Status

    Autotest Status

    The ArduPilot project is made up of:

    User Support & Discussion Forums

    Developer Information

    Top Contributors

    How To Get Involved

    License

    The ArduPilot project is licensed under the GNU General Public License, version 3.

    Maintainers

    Ardupilot is comprised of several parts, vehicles and boards. The list below contains the people that regularly contribute to the project and are responsible for reviewing patches on their specific area. See also the list of developers with merge rights.

    Visit original content creator repository https://github.com/MidwestAire/HeliPilot
  • Phantasma Godot Integration

    Phantasma Godot Integration

    This repository provides an integration of the Phantasma Blockchain into the Godot Engine, allowing Godot developers to harness the power of blockchain for their games and applications.

    Features

    Sample Godot project containing a simple wallet connection using the Phantasma Link library, easy to understand and integrate into existing Godot projects.

    Getting Started

    Prerequisites

    Godot Engine v4.0 or newer
    Basic knowledge of C# in Godot
    A Phantasma wallet (for testing, e.g. Poltergeist or Ecto)

    Installation

    Clone the repository:

        git clone https://github.com/YourGitHubUsername/phantasma-godot-integration.git

    Open the included Godot project in the Godot Engine.

    Follow the sample code and documentation provided in the sample project to understand the integration process.

    Usage

    The sample Godot project provides a straightforward example of how to:

    • Connect to a Phantasma wallet.
    • Display wallet details.
    • Interact with the Phantasma blockchain via the Phantasma Link library.

    For more detailed instructions and explanations, please refer to the in-code documentation and comments provided within the sample project.

    Contributing

    We welcome contributions to improve this integration! If you’d like to contribute, please fork the repository and create a pull request. Make sure to check the CONTRIBUTING.md for best practices and guidelines.

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Acknowledgments

    Phantasma Team for their comprehensive Link library.
    Godot community for the continuous support and contributions.

    Support & Contact

    If you run into issues or have questions, feel free to open an issue or join the Phantasma Telegram.

    Happy developing, and enjoy the power of Phantasma in the Godot Engine!

    Visit original content creator repository https://github.com/phantasma-io-archive/Phantasma-Godot