Blog

  • php-jsondb

    php-jsondb

A PHP class that reads a JSON file as a database. Intended for sample databases.

    Usage

Install the package: composer require jajo/jsondb

    Initialize

    <?php 
    use Jajo\JSONDB;
$json_db = new JSONDB( __DIR__ ); // Or pass the directory of your JSON files with no trailing slash; default is the current directory. E.g. new JSONDB( '/var/www/html/json_files' )

    Inserting

Insert into your new JSON file, using users.json as the example here.

NB: The columns of the first insert become the only columns allowed on subsequent inserts.

    <?php
    $json_db->insert( 'users.json', 
    	[ 
    		'name' => 'Thomas', 
    		'state' => 'Nigeria', 
    		'age' => 22 
    	]
    );

    Get

Retrieve data, just like MySQL in PHP.

    All columns:

    <?php
    $users = $json_db->select( '*' )
    	->from( 'users.json' )
    	->get();
    print_r( $users );

Custom Columns:

    <?php 
    $users = $json_db->select( 'name, state'  )
    	->from( 'users.json' )
    	->get();
    print_r( $users );
    	
    Where Statement:

At the moment, where() combines multiple conditions with OR by default; pass 'AND' as the second argument to combine them with AND.

    <?php 
    $users = $json_db->select( 'name, state'  )
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas' ] )
    	->get();
    print_r( $users );
    	
    // Defaults to Thomas OR Nigeria 
    $users = $json_db->select( 'name, state'  )
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas', 'state' => 'Nigeria' ] )
    	->get();
    print_r( $users );  
    	
// Now it is Thomas AND Nigeria 
    $users = $json_db->select( 'name, state'  )
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas', 'state' => 'Nigeria' ], 'AND' )
    	->get();
    print_r( $users );  	
    	
    	
    Where Statement with regex:

By passing JSONDB::regex to the where statement, you can apply regex matching. It can be used to implement SQL's LIKE or REGEXP_LIKE clauses.

<?php
$users = $json_db->select( 'name, state' )
    	->from( "users" )
    	->where( array( "state" => JSONDB::regex( "/ria/" )), JSONDB::AND )
    	->get();
    print_r( $users );
// Output is the rows whose "state" column contains the string "ria". 

Order By:

Thanks to Tarun Shanker for this feature. By calling the order_by() method with two arguments, the column name and the sort direction (JSONDB::ASC or JSONDB::DESC), the result is sorted accordingly.

    <?php 
    $users = $json_db->select( 'name, state'  )
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas' ] )
    	->order_by( 'age', JSONDB::ASC )
    	->get();
    print_r( $users );

    Updating Row

You can also update the same JSON file with these methods:

    <?php 
    $json_db->update( [ 'name' => 'Oji', 'age' => 10 ] )
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas' ] )
    	->trigger();
    	

Without the where() method, it updates all rows.
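A small sketch reusing only the methods shown above (the new value is illustrative):

<?php
// No where(): every row in users.json gets its 'state' column updated
$json_db->update( [ 'state' => 'Ghana' ] )
	->from( 'users.json' )
	->trigger();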

    Deleting Row

    <?php
    $json_db->delete()
    	->from( 'users.json' )
    	->where( [ 'name' => 'Thomas' ] )
    	->trigger();

Without the where() method, it deletes all rows.

    Exporting to MySQL

You can export the JSON back to an SQL file by using this method and providing an output file:

    <?php 
    $json_db->to_mysql( 'users.json', 'users.sql' );

    Disable CREATE TABLE

    <?php 
    $json_db->to_mysql( 'users.json', 'users.sql', false );

    Exporting to XML

    Tarun Shanker also provided a feature to export data to an XML file

    <?php 
    if( $json_db->to_xml( 'users.json', 'users.xml' ) ) {
    	echo 'Saved!';
    }


  • KapitanPOP

    KapitanPOP

    Refactor of BSC’s performance analysis tools to generate fundamental POP metrics.

    Description

KapitanPOP is an effort to refactor and enhance BSC's performance analysis tools (Paraver, Dimemas, BasicAnalysis) in order to leverage present-day technologies. By using modern libraries for fast I/O, data analysis and distributed programming, KapitanPOP offers the same functionality as BSC's Basic Analysis framework, but in a scalable way with better performance and lower memory consumption.

KapitanPOP has two ultimate goals: to ease the work of HPC application performance analysts within the POP project by enabling faster efficiency-metrics processing, and to serve as a proof of concept of promising libraries and technologies for trace analysis, which in the end is nothing other than pure data analysis.

    Installation

You can install KapitanPOP in two ways: using Singularity containers and downloading KapitanPOP's image from the cloud, or installing it manually. I strongly recommend using KapitanPOP through Singularity containers, because installing the dependencies manually can be frustrating.

    Singularity

After installing Singularity on your computer, you only have to pull KapitanPOP's image from its Singularity Hub repository and run it.

    $singularity pull shub://TovRudyy/KapitanPOP
    $singularity shell KapitanPOP_latest.sif
    

    You can also build the image directly on your computer using the available Singularity recipe.

    $sudo singularity build KapitanPOP.sif Singularity
    

    Manual installation

First of all, you will have to install the following dependencies:

• HDF5 library 1.12.0 release – C/Fortran library to store data on disk in the HDF5 file format. Release 1.10.0 should also work
    • Pandas – Python module for data analysis
    • Numpy – Package for scientific computing in Python
• Dask, the version from GitHub's master branch – parallel computing library for Python. It is very important to install it from the master branch
    • H5py – HDF5 for Python
    • PyTables – Package used by Dask to manage datasets from HDF5 files
    • Dimemas (optional) – Network simulator tool for message-passing programs

    You should be able to install Pandas, Numpy, h5py and PyTables on your computer very easily using pip install, for example:

    $pip install pandas numpy h5py tables
    

    or easier:

    $pip install -r requirements.txt
    

    To install HDF5-1.12.0 the best approach is to download the sources and install them on your computer.

    For Dask you must install it from source from its master branch on GitHub
    because the official releases have some issues related to HDF5.

    Dimemas is not a key dependency. You can run KapitanPOP without it, but you will miss some performance metrics
    (transfer and serialization efficiency).

    Once dependencies are satisfied, you can install KapitanPOP. You might need to modify the Makefile if your HDF5 libraries are not installed in a standard system path.

    $git clone https://github.com/TovRudyy/KapitanPOP.git
    $cd KapitanPOP
    $mkdir -p obj bin
    $make
    

IMPORTANT: KapitanPOP does not work with gcc 10.

    Current Functionalities

    Parsing of Paraver trace files into HDF5 file format

KapitanPOP can read data from .row and .prv files and write it into an equivalent HDF5 file; a small C program is used to parse the .prv files. The resulting HDF5 file contains all the data of the original Paraver file, but in a tabular format divided into dataframes of States, Events and Communications: an equivalent version of the .row and .prv data in a form more convenient for fast I/O.

    Metrics computation of the multiplicative POP model

KapitanPOP can generate a CSV file identical to the original modelfactors, containing the POP metrics of the multiplicative model:

    • Parallel efficiency
    • Load balance
    • Communication efficiency
    • Serialization efficiency
    • Transfer efficiency
    • Computation scalability
    • Global efficiency
    • IPC scalability
    • Frequency scalability
    • Speedup

And the data those metrics are derived from:

    • Average useful IPC
    • Average useful frequency
    • Runtime
    • Runtime (ideal)
    • Useful duration
    • Useful duration (ideal)
    • Useful instructions
    • Useful cycles

    Usage

A quick help is available by running $./kapitanPOP.py --help

To generate a modelfactors.csv file with POP metrics: $./kapitanPOP.py trace1.prv trace2.prv trace3.prv ...

It also accepts regular expressions. By default it will not simulate the application's execution on an ideal network. To enable the computation of ideal performance metrics (transfer & serialization efficiency), add the --dim option flag: $./kapitanPOP.py --dim trace...

To only parse a Paraver trace (.row and .prv files) into an equivalent HDF5 file: $./kapitanPOP.py -p trace.prv

You can tune how the parser performs in order to limit the maximum memory and disk usage. By default, the parser processes the .prv file in chunks of 1 GB and does not apply compression to the resulting HDF5. You can change those settings with the --chunk_size and --comp_lvl option flags.
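For example (a sketch: the exact argument syntax and value units are assumptions, check --help):

$./kapitanPOP.py -p --chunk_size 512 --comp_lvl 1 trace.prv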

IMPORTANT: currently the parsed file is between two and three times bigger than the original Paraver file. This is a necessary trade-off for better memory usage and speed when analysing the trace. If your computer has limited disk space, you should run KapitanPOP with --comp_lvl 1 (compression level 1). This will notably reduce the size of the file (between 5 and 10 times smaller), though the parse time will increase in exchange.

If when using KapitanPOP you get an error like unable to lock file, errno = 11, error message = 'Resource temporarily unavailable', try exporting the environment variable HDF5_USE_FILE_LOCKING to FALSE before running the tool.

    export HDF5_USE_FILE_LOCKING=FALSE
    

    TODO

    • Implement a .pcf parser
    • Implement the additive model
    • Improve execution times and memory usage when analysing traces (kapitanPOP.py)
    • Benchmark KapitanPOP
• Proof-of-concept running KapitanPOP on multiple nodes through Dask
    • Refactor Dimemas simulator

    Disclaimer

KapitanPOP is currently in a continuous development and testing phase; use it at your own risk. You might encounter bugs, wrong results, and situations where the software crashes or, in the worst cases, devours your entire memory and never finishes. If you experience any problem with the software, you are welcome to open an issue or send an email to oleksandr.rudyy[at]hlrs.de describing your findings.


  • ask-sdk-dynamodb-persistence-adapter

The ASK SDK DynamoDB Persistence Adapter package contains an implementation of the persistence adapter from the Core SDK ('ask-sdk-core'), based on AWS SDK v3.

    What is ASK SDK v2 for Node.js

The ASK SDK v2 for Node.js is an open-source Alexa Custom Skill Development Kit. The ASK SDK v2 for Node.js makes it easier for you to build highly engaging skills by allowing you to spend more time on implementing features and less on writing boilerplate code.

    Installing

The ASK SDK DynamoDB Persistence Adapter package is an addon package for the core SDK ('ask-sdk-core') and thus has a peer dependency on the core SDK package. From within your NPM project, run the following commands in the terminal to install them:

    npm install --save ask-sdk-core
    
    npm install --save @ojdoc/ask-sdk-dynamodb-persistence-adapter
    

    Usage and Getting Started

    You can find a getting started guide here.

    Usage with TypeScript

The ASK SDK DynamoDB Persistence Adapter package for Node.js bundles TypeScript definition files for use in TypeScript projects and to support tools that can read .d.ts files. Our goal is to keep these TypeScript definition files updated with each release for any public API.

    Pre-requisites

Before you can begin using these TypeScript definitions with your project, you need to make sure your project meets the following requirements:

    • Use TypeScript v2.x
• Include the TypeScript definitions for Node. You can use npm to install them by typing the following into a terminal window:
    npm install --save-dev @types/node
    

    In Node.js

    To use the TypeScript definition files within a Node.js project, simply import ask-sdk-dynamodb-persistence-adapter as below:

    In a TypeScript file:

    import * as Adapter from '@ojdoc/ask-sdk-dynamodb-persistence-adapter';

    In a JavaScript file:

    const Adapter = require('@ojdoc/ask-sdk-dynamodb-persistence-adapter');
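As a minimal, hedged sketch of wiring the adapter up (the DynamoDbPersistenceAdapter name and its options mirror the upstream ask-sdk adapter; verify them against this package's typings):

const Adapter = require('@ojdoc/ask-sdk-dynamodb-persistence-adapter');

// Assumed constructor options, mirroring the upstream ask-sdk adapter
const persistenceAdapter = new Adapter.DynamoDbPersistenceAdapter({
    tableName: 'my-skill-attributes', // hypothetical table name
    createTable: true,                // create the table if it does not exist
});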

    Opening Issues

For bug reports, feature requests and questions, we would like to hear from you. Search the existing issues and try to make sure your problem doesn't already exist before opening a new issue. It's helpful if you include the version of the SDK, the Node.js or browser environment, and the OS you're using. Please include a stack trace and a reduced repro case when appropriate, too.

    License

    This SDK is distributed under the Apache License, Version 2.0, see LICENSE for more information.


  • AudioGroupChat

    Audio Group Chat

    A real-time audio group chat implementation enabling voice and text communication between humans and AI agents. This project combines WebRTC, speech-to-text, text-to-speech, and LLM capabilities to create interactive conversations with AI agents.

    Features

    • Real-time audio communication using WebRTC
    • Multiple AI agents with distinct voices and personalities
    • Text-to-Speech (TTS) with customizable voice options
    • Speech-to-Text (STT) for human voice input
    • Round-robin speaker selection for balanced conversations
    • Gradio-based web interface for easy interaction
    • Support for both voice and text channels

    Prerequisites

    • Python 3.8+
    • Node.js (for frontend components)
    • Ollama (for local LLM support)

    Installation

    1. Clone the repository:

    git clone <repository-url>
    cd AudioGroupChat
2. Create and activate a virtual environment:

    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
3. Install dependencies:
    pip install -r requirements.txt

    Configuration

    1. Configure Ollama settings in main_app.py:

    config_list = [{
        "model": "gemma3:1b",  # or other supported models
        "base_url": "http://localhost:11434/v1",
        "price": [0.00, 0.00],
    }]
2. (Optional) Set up Twilio TURN server credentials for improved WebRTC connectivity:

    export TWILIO_ACCOUNT_SID=your_account_sid
    export TWILIO_AUTH_TOKEN=your_auth_token

    Usage

    1. Start the application:
    python main_app.py
2. Open the provided Gradio interface URL in your browser (typically http://localhost:7860)

3. Start a conversation by:

      • Speaking into your microphone
      • Typing text messages
      • Using the provided UI controls

    Project Structure

    • main_app.py: Main application entry point
    • audio_groupchat.py: Core audio group chat implementation
    • gradio_ui.py: Gradio web interface components
    • test_group_chat.py: Test cases and examples

    Voice Configuration

    The system supports multiple voice options for AI agents:

    • Energetic (fast, US English)
    • Calm (slower, US English)
    • British (UK English)
    • Authoritative (moderate speed, US English)
    • Default (standard US English)

    API Documentation

    AudioGroupChat Class

    class AudioGroupChat(GroupChat):
        def __init__(self, agents=None, messages=None, max_round=10,
                     speaker_selection_method="round_robin",
                     allow_repeat_speaker=False)

    Key methods:

    • initialize(): Set up audio processing components
    • add_human_participant(user_id): Add a human participant
    • start_audio_session(user_id): Start an audio session

    GradioUI Class

    class GradioUI:
        def __init__(self, audio_chat: AudioGroupChat)
        def create_interface(self) -> gr.Blocks
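A minimal usage sketch based on the names documented above; the exact signatures, return values, and any async behaviour are assumptions rather than this project's confirmed API:

# Hypothetical wiring of AudioGroupChat and GradioUI (names from this README)
chat = AudioGroupChat(agents=[], max_round=10,
                      speaker_selection_method="round_robin")
chat.initialize()                     # set up audio processing components
chat.add_human_participant("user-1")  # register a human participant
chat.start_audio_session("user-1")    # begin the audio session

GradioUI(chat).create_interface().launch()  # serve the Gradio web interface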

    Contributing

    1. Fork the repository
    2. Create a feature branch
    3. Commit your changes
    4. Push to the branch
    5. Create a Pull Request

    License

    This project is licensed under the MIT License – see the LICENSE file for details.

    Acknowledgments


  • Han-Viet-Nhat

    Han-Viet-Nhat

Sino-Vietnamese words that have homographs in Japanese

Citation

Nomura, Junta (野村純太). 2020. Betonamugo oyobi Nihongo niokeru Kangokei Goi no Hinshisei no Taisho 『ベトナム語及び日本語における漢語系語彙の品詞性の対照』 [A contrastive study of the part-of-speech properties of Sino-origin vocabulary in Vietnamese and Japanese]. Graduation thesis, Tokyo University of Foreign Studies.

License

Creative Commons Attribution 4.0 International License (CC BY 4.0)

File Format

ID [TAB] Vietnamese [TAB] Chinese-character form [TAB] Vietnamese part of speech [TAB] Whether a corresponding Japanese word exists [TAB] Japanese reading [TAB] Japanese part of speech

• When a word has more than one part of speech, the delimiter is ,
• When there is no corresponding Japanese word, the "Japanese reading" and "Japanese part of speech" fields contain _ (underscore)
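A purely hypothetical example row in this format (not taken from the data file; the value of the existence flag is assumed):

1	quốc gia	國家	N	1	こっか	名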

Vietnamese Parts of Speech

• V0, V1, V2, V3: verbs that 0/1/2/3 out of 3 native speakers judged able to take an object (a transitivity distinction)
• N: noun
• A: adjective
• Adv: adverb

Japanese Parts of Speech

• 自動: intransitive verb (stem)
• 他動: transitive verb (stem)
• 自他動: verb usable both transitively and intransitively (stem)
• 名: noun
• 形動: na-adjective (stem)
• 副: adverb



  • fubao

Fubao: Come to the Blue Sea (푸바오: 푸른 바다로 오세요)


Project Introduction

Development Environment

    • Management Tool

    • IDE

    • Infra

    • Frontend
      React

    • Backend

    • AI

Service Screens

• Recommendation components and schedule registration

• Fishing points and weather check

• Schedule registration and AI camera operation

• AI fish species identification and length measurement

• Field guide registration

• Fish tank

Main Features

1. Personalized information
  Provides sea-fishing beginners with information tailored to each stage: before registering a trip, after registering, and on the day of the trip.

2. AI image recognition
  Take a photo of a fish with the AI camera and the app identifies the species using AI.
  If you also place a reference object such as a credit card or a cigarette pack next to the fish, its length is measured at the same time.

3. Personal field guide
  Species identified by the AI camera are automatically registered in your field guide.
  In the guide you can check the number of catches and the maximum length for each species.

4. Fish tank
  When a species is registered in the guide, a fish asset for that species is added to the fish tank,
  and you can watch it swim around the tank.

Technology

1. Species identification
  A dataset of 11,854 Google-crawled images was trained with a MobileNet-architecture CNN
  to produce the AI model file.

2. Length measurement

• Fish object detection
  Uses the Object Detection API
• Card object detection
  Uses OpenCV Canny

3. Recommendation algorithm
  Recommendations based on user surveys and reviews, via a weighted random algorithm.

4. Home IoT
  Fish assets are added with Three.js and their animations edited with the Blender tool.
  Lighting is controlled via MQTT communication between a Raspberry Pi and a Wi-Fi module (ESP8266).

Design Documents

• ERD

• Architecture

• CI/CD

Team

• 고광현: FE / EMBEDDED
• 김민영: BE / INFRA
• 김지용: Team Lead / FE
• 박미성: AI
• 오유진: BE / FE / PT

Category

Application        Domain                Language         Framework
🔲 Desktop Web     ✅ AI                 ✅ JavaScript    🔲 Vue.js
🔲 Mobile Web      🔲 Big Data           ✅ TypeScript    ✅ React
🔲 Responsive Web  🔲 Blockchain         🔲 C/C++         🔲 Angular
✅ Android App     ✅ IoT                🔲 C#            ✅ Node.js
🔲 iOS App         🔲 AR/VR/Metaverse    ✅ Python        ✅ Flask/Django
✅ Desktop App     🔲 Game               🔲 Java          🔲 Spring/Springboot
                                         🔲 Kotlin

Git Commit Convention

Commit Message Structure

A commit message consists of a subject, a body, and a footer:

type: Subject  // subject
body           // body (optional)
footer         // footer (optional)

type: states the intent of the commit (e.g. feat, fix, docs)

Subject: the title; a short summary of the code changes

body: used when a longer explanation is needed; describe what you changed and why, not how (additional context or the reason for the commit)

footer: written when you want to reference an issue tracker ID

Commit Type

• Format: tag + subject
• The first character is uppercase
• The form is "Tag: Subject"; note the space after the ":" [e.g. "Feat: buy album api" (Feat is the tag, buy album api is the subject)]

Tag Types

• Add : code, tests, examples, documentation, etc. were added
• Implement : a noteworthy implementation was completed, beyond simply adding code
• Design : the UI design was changed, e.g. CSS
• Temp : a temporary commit saved mid-work for practical reasons
• Docs : documentation was modified
• Fix : a bug was fixed
• Style : code formatting changed or semicolons were missing; no functional code change
• Refactor : code refactoring
• Test : test code; when test code or refactoring tests were added
• Rename : a file (or folder) name was changed
• Remove : code or files were deleted; "Clean" or "Eliminate" are also used
• Chore : build tasks or package manager changes
• Improve : an improvement was made, e.g. to compatibility, validation, or accessibility
• Move : code was moved
• Feat : a new feature was added
• Updated : an account or version was updated; used mainly for documents, resources, and libraries rather than code
• Comment : necessary comments were added or changed

Subject

• At most 50 characters; no trailing period or special symbols
• Put the verb (base form) first, with the first letter capitalized
• Write the subject in a telegraphic style: a concise, to-the-point phrase rather than a full declarative sentence
• Fixed -> Fix
• Added -> Add
• Modified -> Modify

Body

• Wrap body lines at 72 characters
• Write the body in as much detail as needed, regardless of length
• Explain what was changed or why, rather than how

Footer

• The footer is optional and lists issue tracker IDs
• Use the form "Type: #issue-number"
• Separate multiple issue numbers with commas (,)
• Use one of the following issue tracker types:
1. Fixes: the issue is being worked on (not yet resolved)
2. Resolves: used when the issue has been resolved
3. Ref: used when there is an issue to reference
4. Related to: issue numbers related to this commit (not yet resolved), e.g. Fixes: #45 Related to: #34, #23
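Putting it together, a hypothetical commit message following this convention (the issue numbers are illustrative):

Fix: Correct fish length scaling for card reference

The card contour detected with OpenCV Canny was measured in pixels
and never converted using the known card width, so reported lengths
were off by the camera scale factor.

Resolves: #45
Related to: #34, #23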
  • symphart

    symphart

    An introduction to Symfony 4.

    Preview

    Quick Start

Requirements: Composer and a running MySQL service.

    # Install dependencies
    composer install
    
    # Create and edit the ".env.local" file and add DB params
    
    # Run in development mode
    php bin/console server:run

    Baby steps for development:

Install Composer globally (download it, install it, and move it into your PATH). Check with the command: composer -V.

Then navigate to the directory "/Applications/XAMPP/xamppfiles/htdocs/sites" (or your public web server directory) and create a new Symfony project with this command: composer create-project symfony/skeleton symphart.

NOTE: I created this project inside a folder called "sites", but you can skip that folder if you prefer to create a virtual host. Another option is to use the "Symfony PHP web server", as described below.

Start your web server (Apache) and load this URL in your browser: http://localhost/sites/symphart/public/

TIP: You can create a basic .htaccess file inside the "/public" folder.
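A minimal sketch of such a file (standard Apache mod_rewrite front-controller rules, not taken from this repository):

# public/.htaccess
<IfModule mod_rewrite.c>
    RewriteEngine On
    # Serve existing files directly; route everything else to index.php
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteRule ^ index.php [QSA,L]
</IfModule>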

Install the "Symfony PHP web server" using the command: composer require symfony/web-server-bundle --dev.

Create your first controller:

<?php
// src/Controller/ExampleController.php

namespace App\Controller;

use Symfony\Component\HttpFoundation\Response;

class ExampleController {
	public function index() {
		// Double quotes, so the apostrophe in "It's" doesn't end the string
		return new Response("<html><body><h1>It's just an example</h1></body></html>");
	}
}
    

    And define your first route:

    // config/routes.yaml
    
    index:
       path: /
       controller: App\Controller\ExampleController::index
    

To start the server in development mode: php bin/console server:run. Observe how a web server starts serving your application on localhost on a specific port.

    Et Voilà! You are ready to work.

    Optional development steps:

• Install Annotations to define routes inside controllers: composer require annotations (a controller sketch follows this list)
    • Install Twig as template engine: composer require twig
    • Install Doctrine: composer require doctrine maker.
• If you want to use a database, create a copy of the .env file, rename it to .env.local, and then define your SQL credentials and database name.
      • Then execute php bin/console doctrine:database:create.
      • Now you can create entities with the command php bin/console make:entity Foo (where “Foo” will be the name of entity). You can use the same command to update an existing entity
• Execute php bin/console doctrine:migrations:diff to create a migration file. Database migrations are a way to safely update your database schema, both locally and on production. Then execute the migration with the command php bin/console doctrine:migrations:migrate. Your database will now be upgraded to the new structure.
• If you edit entity files, you need to run the php bin/console doctrine:migrations:diff and php bin/console doctrine:migrations:migrate commands again to sync the database.
    • Install Assets manager: composer require symfony/asset
    • Install form component: composer require form
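As an illustration of the Annotations and Twig options above, a minimal controller sketch (the file name, route, and template path are hypothetical):

<?php
// src/Controller/ArticleController.php
namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\Routing\Annotation\Route;

class ArticleController extends AbstractController {
	/**
	 * @Route("/articles", name="article_list")
	 */
	public function index() {
		// Render a Twig template, passing an empty list as placeholder data
		return $this->render('articles/index.html.twig', [ 'articles' => [] ]);
	}
}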

    Tips for development:

    • Examinate your routes: php bin/console debug:router
    • Execute queries to database from console: php bin/console doctrine:query:sql 'SELECT * from article'


  • ccnn

    CCNN: Constrained Convolutional Neural Networks for Weakly Supervised Segmentation

    Deepak Pathak, Philipp Krähenbühl, Trevor Darrell

    CCNN is a framework for optimizing convolutional neural networks with linear constraints.

    • It has been shown to achieve state-of-the-art results on the task of weakly-supervised semantic segmentation.
    • It is written in Python and C++, and based on Caffe.
    • It has been published at ICCV 2015. It was initially described in the arXiv report.

    If you find CCNN useful in your research, please cite:

    @inproceedings{pathakICCV15ccnn,
        Author = {Pathak, Deepak and Kr\"ahenb\"uhl, Philipp and Darrell, Trevor},
        Title = {Constrained Convolutional Neural Networks for Weakly Supervised Segmentation},
        Booktitle = {International Conference on Computer Vision ({ICCV})},
        Year = {2015}
    }
    

    License

    CCNN is released under academic, non-commercial UC Berkeley license (see LICENSE file for details).

    Contents

    1. Requirements
    2. Installation
    3. Usage
    4. Scripts Information
    5. Extra Downloads

    1) Requirements

    1. Requirements for Caffe and pycaffe (see: Caffe installation instructions)
2. GCC version 4.7 or newer
3. Boost version 1.53 or newer (recommended). If the system dependencies give you issues, install the anaconda dependencies:

    $ conda install boost
    $ conda install protobuf
    
4. A good GPU (e.g., Titan, K20, K40, ...) with at least 3 GB of memory is sufficient.

    2) Installation

    1. Clone the CCNN repository

    # Make sure to clone with --recursive
    git clone --recursive https://github.com/pathak22/ccnn.git
2. Build Caffe and pycaffe
    • Now follow the Caffe installation instructions here
    • Caffe must be built with support for Python layers!
    • In your Makefile.config, make sure to have this line uncommented
      WITH_PYTHON_LAYER := 1
    • You can download my Makefile.config for reference.

    cd ccnn/caffe-ccnn
    # If you have all caffe requirements installed
    # and your Makefile.config in place, then simply do:
    make -j8 && make pycaffe
3. Now build CCNN

      cd ccnn
      mkdir build
      cd build
      cmake ..
      make -j8
• Note: If anaconda is installed, the python paths may be mixed up between anaconda and the system python.
    • I usually run this command :
    cmake .. -DBOOST_ROOT=/home/pathak/anaconda -DPYTHON_LIBRARY=/home/pathak/anaconda/lib/libpython2.7.so -DPYTHON_INCLUDE_DIR=/home/pathak/anaconda/include/python2.7/ -DCMAKE_C_COMPILER=gcc-4.8 -DCMAKE_CXX_COMPILER=g++-4.8
• To verify this, run ccmake ./ inside the build folder and manually check the following:
  CMAKE_CXX_COMPILER, CMAKE_C_COMPILER, PYTHON_EXECUTABLE, PYTHON_INCLUDE_DIR, PYTHON_LIBRARY
• Make sure that cmake doesn't mix up the anaconda boost with the system boost.
4. Configure path (if needed) in src/user_config.py.

5. (Optional; I don't do it) If everything runs fine, set CMAKE_BUILD_TYPE to Release using ccmake . (this prevents eigen from checking all assertions etc. and works faster).

    3) Usage

    Demo CCNN.

    cd ccnn
    bash ./models/scripts/download_ccnn_models.sh
    # This will populate the `ccnn/models/` folder with trained models.
    python ./src/demo.py

    Train CCNN.

    cd ccnn
    bash ./models/scripts/download_pretrained_models.sh
    # This will populate the `ccnn/models/` folder with imagenet pre-trained models.
    python ./src/train.py 2> log.txt

    Test CCNN.

    cd ccnn
    python ./src/test.py  # To test IOU with CRF post-processing
    python ./src/test_argmax.py  # To test IOU without CRF

    4) Scripts Information

    Model Prototxts:

    • models/fcn_8s/ : Atrous algorithm based 8-strided VGG, described here.
    • models/fcn_32s/ : 32-strided VGG

    Configure:

• src/config.py : Set glog-minlevel accordingly to get the desired caffe output in the terminal

    Helper Scripts:

• src/extras/ : These scripts are not needed to run the code. They are simple helper scripts to create data, prepare the pascal test server file, add the pascal cmap to segmentation outputs, etc.

    5) Extra Downloads


  • Bibliography

Book                                                  Author                              Tags                 Read
----------------------------------------------------- ----------------------------------- -------------------- ---------
Programming as Theory Building                        Peter Naur                          programming          -
Being and Time                                        Heidegger                                                -
Natural Categories                                    Eleanor Rosch                       psychology research  -
The Annotated Turing                                  Charles Petzold                                          -
The Art of Computer Programming I                     Donald E. Knuth                                          -
The Art of Computer Programming II                    Donald E. Knuth                                          -
Linkers & Loaders                                     John R. Levine (Morgan Kaufmann)                         -
Principia Mathematica 1.                              Bertrand Russell, Alfred Whitehead                       -
97 Things Every Programmer Should Know                Kevlin Henney                                            Completed
Peopleware: Productive Projects and Teams             Tom DeMarco; Timothy Lister         software projects    Completed
The Pragmatic Programmer: From Journeyman to Master   Andrew Hunt; David Thomas           programming          Completed
    
    
    
    From the jargon file
    ====================
    
    History and "history"
    ---------------------
    
    [Levy] Hackers. Steven Levy. Anchor/Doubleday. Copyright © 1984. ISBN 0-385-19195-2. 
    [Kelly-Bootle] The Computer Contradictionary. Stan Kelly-Bootle. MIT Press. Copyright © 1995. ISBN 0-262-61112-0. 
    [Jennings] The Devouring Fungus: Tales from the Computer Age. Karla Jennings. Norton. Copyright © 1990. ISBN 0-393-30732-8. 
    [Kidder] The Soul of a New Machine. Tracy Kidder. Avon. Copyright © 1982. ISBN 0-380-59931-7. 
[Markoff & Hafner] Cyberpunk: Outlaws and Hackers on the Computer Frontier. Katie Hafner. John Markoff. Simon & Schuster. Copyright © 1991. ISBN 0-671-68322-5. 
    [Stoll] The Cuckoo's Egg. Clifford Stoll. Doubleday. Copyright © 1989. ISBN 0-385-24946-2. 
[Dickson] Words. Paul Dickson. Dell. Copyright © 1982. ISBN 0-440-52260-7. 
[Sellar & Yeatman] 1066 And All That. W.C. Sellar. R.J. Yeatman. Copyright © 1930. 
    
    Fiction
    -------
    
    [Vinge] True Names ... and Other Dangers. Vernor Vinge. Baen Books. Copyright © 1987. ISBN 0-671-65363-6. 
    
    Unix
    ----
    
    [Libes] Life with UNIX: a Guide for Everyone. Don Libes. Sandy Ressler. Prentice-Hall. Copyright © 1989. ISBN 0-13-536657-7. 
    
    BSD
    ---
    
    "the daemon book"
    
      The Design and Implementation of the 4.3BSD UNIX Operating System, by Samuel J. Leffler, Marshall Kirk McKusick, Michael J. Karels, and John S. Quarterman (Addison-Wesley Publishers, 1989, ISBN 0-201-06196-1)
    
      The Design and Implementation of the 4.4 BSD Operating System by Marshall Kirk McKusick, Keith Bostic, Michael J. Karels and John S. Quarterman (Addison-Wesley Longman, 1996, ISBN 0-201-54979-4) 
      
    Perl
    ----
    
    "the camel book"
    
    Programming Perl, by Larry Wall and Randal L. Schwartz, O'Reilly and Associates 1991, ISBN 0-937175-64-1 (second edition 1996, ISBN 1-56592-149-6; third edition 2000, 0-596-00027-8, adding as authors Tom Christiansen and Jon Orwant but dropping Randal Schwartz).
    
    
    
    Programming, Comp Sci
    -----------
    
    Jon Bentley, "More Programming Pearls" (not in bibliography)
    
    "the wizard book"
    Structure and Interpretation of Computer Programs (Hal Abelson, Jerry Sussman and Julie Sussman; MIT Press, 1984, 1996; ISBN 0-262-01153-0)
    Available: https://mitpress.mit.edu/sites/default/files/sicp/index.html
    
    "the dragon book"
    Compilers: Principles, Techniques and Tools, by Alfred V. Aho, Ravi Sethi, and Jeffrey D. Ullman (Addison-Wesley 1986; ISBN 0-201-10088-6)
    
    "the aluminum book"
    [MIT] Common LISP: The Language, by Guy L. Steele Jr. (Digital Press, first edition 1984, second edition 1990).
    
    "the Cinderella book"
    [CMU] Introduction to Automata Theory, Languages, and Computation, by John Hopcroft and Jeffrey Ullman, (Addison-Wesley, 1979).
    
    paper on "the wheel of reincarnation"
    T.H. Myer and I.E. Sutherland "On the Design of Display Processors", Comm. ACM, Vol. 11, no. 6, June 1968
    
