Author: tl8088r7lolo

  • openapi-sdk-php

    English | Simplified Chinese

    Alibaba Cloud SDK for PHP

    The Alibaba Cloud V1.0 SDK will soon enter the Basic Security Maintenance phase and is no longer recommended for use. We recommend using the V2.0 SDK instead.

    Release Notes

    We developed a new kernel with the goals of eliminating known issues and staying compatible with the old syntax, and added a number of new features.

    Prerequisites

    Your system will need to meet the Prerequisites, including having PHP >= 5.5. We highly recommend having it compiled with the cURL extension and cURL 7.16.2+.

    Installation

    If Composer is already installed globally on your system, run the following in the base directory of your project to install Alibaba Cloud SDK for PHP as a dependency:

    composer require alibabacloud/sdk

    Please see the Installation guide for more detailed information about installing through Composer and other methods.

    Troubleshoot

    Troubleshoot provides an OpenAPI diagnosis service that helps developers quickly locate problems and offers solutions based on the RequestID or the error message.

    Online Demo

    The Alibaba Cloud OpenAPI Developer Portal lets you call cloud product OpenAPIs online, dynamically generates SDK example code, and offers quick API lookup, which significantly lowers the difficulty of using the cloud APIs.

    Quick Examples

    Before you begin, you need to sign up for an Alibaba Cloud account and retrieve your Credentials. Before making a request, please read Understanding the Clients; after a request, please read Understanding the Result.

    Currently, only some Alibaba Cloud products are supported (see Supported Products). For products that are not yet supported, you can use Alibaba Cloud Client for PHP to initiate custom requests, and the Alibaba Cloud OpenAPI Developer Portal can generate Alibaba Cloud Client for PHP code online.

    <?php
    
    use AlibabaCloud\Client\AlibabaCloud;
    use AlibabaCloud\Client\Exception\ClientException;
    use AlibabaCloud\Client\Exception\ServerException;
    use AlibabaCloud\Ecs\Ecs;
    
    // Set up a global client
    AlibabaCloud::accessKeyClient('foo', 'bar')
                ->regionId('cn-hangzhou')
                ->asDefaultClient();
    
    try {
        // Access product APIs
        $request = Ecs::v20140526()->describeRegions();
        
        // Set options/parameters and execute request
        $result = $request->withResourceType('type') // API parameter
                          ->withInstanceChargeType('type') // API parameter
                          ->client('client1') // Specify the client used to send the request
                          ->debug(true) // Enable debug mode to output detailed request information
                          ->connectTimeout(0.01) // Throw an exception if the connection times out
                          ->timeout(0.01) // Throw an exception if the request times out
                          ->request(); // Execute the request
    
        // Options can also be set by passing in an array
        $options = [
                       'debug'           => true,
                       'connect_timeout' => 0.01,
                       'timeout'         => 0.01,
                       'query'           => [
                           'ResourceType' => 'type',
                           'InstanceChargeType' => 'type',
                       ],
                   ];
        
        // Settings priority: later settings overwrite earlier ones
        $result2 = Ecs::v20140526()
                      ->describeRegions($options)
                      ->options([
                                    'query' => [
                                        'Key'      => 'I will overwrite this value in constructor',
                                        'new'      => 'I am new value',
                                    ],
                                ])
                      ->options([
                                    'query' => [
                                        'Key' => 'I will overwrite the previous value',
                                        'bar' => 'I am new value',
                                    ],
                                ])
                      ->debug(false) // Overwrites the debug(true) set earlier
                      ->request();
        
    } catch (ClientException $exception) {
        echo $exception->getMessage() . PHP_EOL;
    } catch (ServerException $exception) {
        echo $exception->getMessage() . PHP_EOL;
        echo $exception->getErrorCode() . PHP_EOL;
        echo $exception->getRequestId() . PHP_EOL;
        echo $exception->getErrorMessage() . PHP_EOL;
    }

    Issues

    When opening an issue, please follow the guidelines; issues that do not conform to the guidelines may be closed immediately.

    Changelog

    Detailed changes for each release are documented in the release notes.

    Contribution

    Please make sure to read the Contributing Guide before making a pull request.

    License

    Apache-2.0

    Copyright (c) 2009-present, Alibaba Cloud All rights reserved.

  • harvest_qmcpack

    harvest_qmcpack

    Python module containing useful routines to inspect and modify qmcpack objects.

    Quick Start

    Install

    Clone the repository and add it to PYTHONPATH. To use examples, add bin to PATH.

    git clone https://github.com/Paul-St-Young/harvest_qmcpack.git ~/harvest_qmcpack
    export PYTHONPATH=~/harvest_qmcpack:$PYTHONPATH
    export PATH=~/harvest_qmcpack/bin:$PATH

    Prerequisites can be installed from requirements.txt:

    cd ~/harvest_qmcpack; pip install --user -r requirements.txt

    You can also install with pip if you do not intend to change the code:

    git clone https://github.com/Paul-St-Young/harvest_qmcpack.git ~/harvest_qmcpack
    pip install --user ~/harvest_qmcpack

    To update to the newest version:

    cd ~/harvest_qmcpack
    git pull
    pip install --user --upgrade ~/harvest_qmcpack

    Tests

    Unit tests should work with either nosetests or pytest:

    cd ~/harvest_qmcpack; pytest -v .

    Use

    The library functions can be used in a Python script:

    # extract all scalar data from a run directory
    #  look for scalar.dat files and collect statistics
    #  hint: run directory does not have to be an actual run
    import os
    import pandas as pd
    from qharv.reel  import scalar_dat, mole
    from qharv.sieve import scalar_df
    """
    *** Strategy adopted in this script:
     1. use "mole" to dig up the locations of all 
      scalar.dat to be analyzed.
     2. use "reel" to reel in all scalar data 
      without prejudice.
     3. use "sieve" to remove equilibration data 
      and perform averages to shrink the database.
    only two human inputs are required: folder, nequil
    """
    
    # folder containing QMCPACK scalar.dat files
    folder = './runs'
    
    # define equilibration length and autocorrelation length
    nequil = 5
    kappa  = 1.0 # None to re-calculate
    #  runs should be designed to have short equilibration and
    # no autocorrelation. kappa can be calculated on-the-fly;
    # be warned though: kappa calculation is slow. For nequil:
    # unfortunately I have yet to find a fast and RELIABLE
    # algorithm to determine nequil. For custom nequil, use
    # a dictionary in the `for floc in flist` loop.
    
    # generate the list of scalar.dat files to analyze
    flist = mole.files_with_regex('*scalar.dat', folder)
    
    # analyze the list of scalar.dat files
    data  = []
    for floc in flist:
      mydf = scalar_dat.parse(floc)
      mdf  = scalar_df.mean_error_scalar_df(mydf,nequil,kappa=kappa)
      assert len(mdf) == 1 # each scalar.dat should contribute only one entry
      # add metadata to identify runs
      mdf['path'] = os.path.dirname(floc)
      mdf['fdat'] = os.path.basename(floc)
      data.append(mdf)
    df = pd.concat(data).reset_index() # index must be unique for the database to be saved
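
    As the comments above note, a custom nequil per run can be supplied with a dictionary inside the `for floc in flist` loop. Below is a minimal, self-contained sketch of that idea; the run-directory keys and equilibration lengths are hypothetical, and the rest reuses the same qharv calls as the script above.

    # hypothetical per-run equilibration lengths, keyed by run directory;
    # runs that are not listed fall back to a default value
    import os
    import pandas as pd
    from qharv.reel  import scalar_dat, mole
    from qharv.sieve import scalar_df
    
    nequil_map = {'./runs/dmc': 20, './runs/vmc': 5}  # hypothetical entries
    default_nequil = 5
    kappa = 1.0
    
    flist = mole.files_with_regex('*scalar.dat', './runs')
    data = []
    for floc in flist:
      path = os.path.dirname(floc)
      nequil = nequil_map.get(path, default_nequil)  # per-run override
      mydf = scalar_dat.parse(floc)
      mdf  = scalar_df.mean_error_scalar_df(mydf, nequil, kappa=kappa)
      mdf['path'] = path
      mdf['fdat'] = os.path.basename(floc)
      data.append(mdf)
    df = pd.concat(data).reset_index()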

    The examples in the “bin” folder can be run in the shell:

    $ stalk vmc.in.xml
    $ stab vmc.s000.scalar.dat
    $ slash-and-burn -y -v nscf
    $ rebuild_wf opt.in.xml

    Documentation

    Documentation is available on github pages.
    A local copy can be generated using sphinx (pip install --user sphinx).
    To generate the documentation, first use sphinx-apidoc to convert doc strings to rst documentation:

    cd ~/harvest_qmcpack/doc; sphinx-apidoc -o source ../qharv

    Next, use the generated Makefile to create html documentation:

    cd ~/harvest_qmcpack/doc; make html

    Finally, use your favorite browser to view the documentation:

    cd ~/harvest_qmcpack/doc/build; firefox index.html

    Examples

    Example usage of the qharv library is included in the “harvest_qmcpack/bin” folder.
    Each file in the folder is a Python script that performs a very specific task:

    • stalk: Show the crystal structure specified in a QMCPACK input, e.g. stalk vmc.in.xml
    • stab: Scalar TABle (stab) analyzer; analyzes one column of a scalar table file, e.g. stab vmc.s000.scalar.dat
    • transplant: Back up nexus-generated folders, allowing the user to select which subfolders to back up. e.g. transplant graphene attic -s opt -s qmc -e will back up the QMC runs in the “opt” and “qmc” folders from graphene/results and graphene/runs to attic/graphene/results and attic/graphene/runs. The “scf” and “nscf” folders will not be backed up.
    • slash-and-burn: Remove temporary files generated by Quantum Espresso.
    • rebuild_wf: Rerun QMCPACK on optimized wavefunctions, e.g. rebuild_wf opt.xml

    Description

    This module is intended to speed up on-the-fly setup, running, and analysis of QMCPACK calculations.
    The module should be used as a collection of Python equivalents of bash commands.
    This module is NOT intended to be a full-fledged workflow tool.
    Please refer to nexus for complete workflow management.

    Development Guidelines

    sown the seeds, inspect the crop;
    crossbreed to improve, transplant to adapt;
    reel them in, sieve for good, and refine for the best.
    — qharv maxim

    Laws of Programming (fully plagiarized from Asimov)

    1. A program may not produce wrong results or, through inaction, allow a user to produce wrong results.
    2. A program must accept manual overrides given to it by a user, except where such overrides will conflict with the First Law.
    3. A program must be as simple and as readable as possible, as long as doing so does not conflict with the First or the Second Law.

    note: the simplest way to satisfy both the First and the Second Law is to abort at an unknown request.

  • self-reasoning-tokens-pytorch

    Self Reasoning Tokens – Pytorch (wip)

    Exploration into the proposed Self Reasoning Tokens by Felipe Bonetto. The blog post seems a bit unfleshed out, but the idea of stop gradients from next token(s) is an interesting one.

    My initial thought was to apply a stop gradient mask on the attention matrix, but then realized that the values of the “reasoning” tokens could not be stop gradiented correctly without memory issues.

    While walking the dog and meditating on this, I came to the realization that one can create independent stop gradient masks for queries, keys, values in either flash attention or a custom attention backwards, and there may be a whole array of possibilities there. If any experiments come back positive from this exploration, will build out a concrete implementation of this.

    Install

    $ pip install self-reasoning-tokens-pytorch

    Usage

    import torch
    from self_reasoning_tokens_pytorch import Transformer
    
    model = Transformer(
        dim = 512,
        depth = 4,
        num_tokens = 256,
        stop_grad_next_tokens_to_reason = True
    )
    
    x = torch.randint(0, 256, (1, 4))
    
    loss = model(
        x,
        num_reason_tokens = 4,                # number of reasoning tokens per time step
        num_steps_future_can_use_reason = 16, # say you wish for reason tokens to be only attended to by tokens 16 time steps into the future
        return_loss = True
    )
    
    loss.backward()
    
    logits = model(x, num_reason_tokens = 4)

    Or use the novel attention with ability to pass specific stop gradient masks for queries, keys, values

    import torch
    from self_reasoning_tokens_pytorch import stop_graddable_attn
    
    q = torch.randn(2, 8, 1024, 64)
    k = torch.randn(2, 8, 1024, 64)
    v = torch.randn(2, 8, 1024, 64)
    
    stop_grad_mask = torch.randint(0, 2, (8, 1024, 1024)).bool()
    
    out = stop_graddable_attn(
        q, k, v, causal = True,
        q_stop_grad_mask = stop_grad_mask,
        k_stop_grad_mask = stop_grad_mask,
        v_stop_grad_mask = stop_grad_mask
    )
    
    out.shape # (2, 8, 1024, 64)

    The mask should look something like the sketch below.
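
    The snippet below is a hypothetical illustration, not the repository's actual mask-building code: it constructs a boolean stop-gradient mask of the (seq, seq) form passed to stop_graddable_attn, assuming True marks attention entries whose gradient should be blocked. The reasoning-token positions and the num_future cutoff are made-up values chosen so the printed pattern is easy to read.

    import torch
    
    seq_len = 12
    num_future = 4                         # reasoning tokens receive gradient only from
                                           # queries at least this many steps ahead
    reason_pos = torch.tensor([3, 7, 11])  # hypothetical reasoning-token positions
    
    i = torch.arange(seq_len).unsqueeze(1)  # query positions, shape (seq, 1)
    j = torch.arange(seq_len).unsqueeze(0)  # key / value positions, shape (1, seq)
    
    is_reason = torch.zeros(seq_len, dtype=torch.bool)
    is_reason[reason_pos] = True
    
    # block gradients flowing into a reasoning token from any query that is
    # fewer than `num_future` steps ahead of it
    stop_grad_mask = is_reason.unsqueeze(0) & ((i - j) < num_future)
    
    print(stop_grad_mask.int())  # 1 = gradient stopped, 0 = gradient allowed

    Broadcasting this over heads, e.g. stop_grad_mask.unsqueeze(0).expand(8, -1, -1), gives the (heads, seq, seq) shape used in the usage example above.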

    Todo

    • deviating from blog post, also try optimizing only a subset of attention heads by tokens far into the future

    Citations

    @misc{Bonetto2024,
        author  = {Felipe Bonetto},
        url     = {https://reasoning-tokens.ghost.io/reasoning-tokens/}
    }