Changeset e9eaaf4


Timestamp: Feb 3, 2016, 10:54:24 PM
Author: Paul Brossier <piem@piem.org>
Branches: feature/autosink, feature/cnn, feature/cnn_org, feature/constantq, feature/crepe, feature/crepe_org, feature/pitchshift, feature/pydocstrings, feature/timestretch, fix/ffmpeg5, master, sampler
Children: 00d0275
Parents: 8605361 (diff), 94b16497 (diff)

Note: this is a merge changeset; the changes displayed below correspond to the merge itself. Use the (diff) links above to see all the changes relative to each parent.

Message: Merge branch 'develop' into awhitening

Files: 6 edited

  • Makefile

    --- a/Makefile (r8605361)
    +++ b/Makefile (re9eaaf4)
        @[ -d wafilb ] || rm -fr waflib
        @chmod +x waf && ./waf --help > /dev/null
    -   @mv .waf*/waflib . && rm -fr .waf-*
    +   @mv .waf*/waflib . && rm -fr .waf*
        @sed '/^#==>$$/,$$d' waf > waf2 && mv waf2 waf
        @chmod +x waf
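The Makefile change widens the cleanup glob from `.waf-*` to `.waf*`. Presumably (an assumption; the changeset message does not say) this is so that extraction directories whose names do not contain the hyphen, e.g. a hypothetical `.waf3-1.8.14`, are removed as well. A minimal Python sketch of the difference between the two patterns:

```python
import glob
import os
import tempfile

# Scratch directory with two waf-style extraction directories; the
# .waf3-* name is hypothetical, chosen only to show the glob difference.
d = tempfile.mkdtemp()
for name in ('.waf-1.8.14', '.waf3-1.8.14'):
    os.mkdir(os.path.join(d, name))

old = glob.glob(os.path.join(d, '.waf-*'))  # matches only .waf-1.8.14
new = glob.glob(os.path.join(d, '.waf*'))   # matches both directories
print(len(old), len(new))  # 1 2
```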
  • README.md

    --- a/README.md (r8605361)
    +++ b/README.md (re9eaaf4)
    The latest version of the documentation can be found at:

    -  http://aubio.org/documentation
    +  https://aubio.org/documentation

    Installation and Build Instructions
    …
    A number of distributions already include aubio. Check your favorite package
    management system, or have a look at the [download
    -page](http://aubio.org/download).
    +page](https://aubio.org/download).

    aubio uses [waf](https://waf.io/) to configure, compile, and test the source:
    …
      - Paul Brossier, _[Automatic annotation of musical audio for interactive
    -    systems](http://aubio.org/phd)_, PhD thesis, Centre for Digital music,
    +    systems](https://aubio.org/phd)_, PhD thesis, Centre for Digital music,
    Queen Mary University of London, London, UK, 2006.

    …
      - P. M. Brossier and J. P. Bello and M. D. Plumbley, [Real-time temporal
    -    segmentation of note objects in music signals](http://aubio.org/articles/brossier04fastnotes.pdf),
    +    segmentation of note objects in music signals](https://aubio.org/articles/brossier04fastnotes.pdf),
    in _Proceedings of the International Computer Music Conference_, 2004, Miami,
    Florida, ICMA

      -  P. M. Brossier and J. P. Bello and M. D. Plumbley, [Fast labelling of note
    -     objects in music signals] (http://aubio.org/articles/brossier04fastnotes.pdf),
    +     objects in music signals] (https://aubio.org/articles/brossier04fastnotes.pdf),
    in _Proceedings of the International Symposium on Music Information Retrieval_,
    2004, Barcelona, Spain
    …
    -----------------------------

    -The home page of this project can be found at: http://aubio.org/
    +The home page of this project can be found at: https://aubio.org/

    Questions, comments, suggestions, and contributions are welcome. Use the
  • python/demos/demo_pysoundcard_play.py

    --- a/python/demos/demo_pysoundcard_play.py (r8605361)
    +++ b/python/demos/demo_pysoundcard_play.py (re9eaaf4)
        samplerate = f.samplerate

    -    s = Stream(sample_rate = samplerate, block_length = hop_size)
    +    s = Stream(samplerate = samplerate, blocksize = hop_size)
        s.start()
        read = 0
  • python/demos/demo_pysoundcard_record.py

    --- a/python/demos/demo_pysoundcard_record.py (r8605361)
    +++ b/python/demos/demo_pysoundcard_record.py (re9eaaf4)
        hop_size = 256
        duration = 5 # in seconds
    -    s = Stream(block_length = hop_size)
    -    g = sink(sink_path, samplerate = s.sample_rate)
    +    s = Stream(blocksize = hop_size, channels = 1)
    +    g = sink(sink_path, samplerate = int(s.samplerate))
    +    print s.channels

        s.start()
        total_frames = 0
    -    while total_frames < duration * s.sample_rate:
    -        vec = s.read(hop_size)
    -        # mix down to mono
    -        mono_vec = vec.sum(-1) / float(s.input_channels)
    -        g(mono_vec, hop_size)
    -        total_frames += hop_size
    +    try:
    +        while total_frames < duration * s.samplerate:
    +            vec = s.read(hop_size)
    +            # mix down to mono
    +            mono_vec = vec.sum(-1) / float(s.channels[0])
    +            g(mono_vec, hop_size)
    +            total_frames += hop_size
    +    except KeyboardInterrupt, e:
    +        print "stopped after", "%.2f seconds" % (total_frames / s.samplerate)
    +        pass
        s.stop()
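The record demo mixes the multichannel buffer down to mono by summing across the channel axis and dividing by the channel count. A minimal sketch of the same arithmetic on plain Python lists (no pysoundcard or numpy required; the frame values are made up for illustration):

```python
def mix_to_mono(frames, n_channels):
    # Average each frame's channel samples, mirroring the demo's
    # vec.sum(-1) / float(s.channels[0]) step.
    return [sum(frame) / float(n_channels) for frame in frames]

# Three stereo frames, mixed down to three mono samples.
stereo = [[0.5, -0.5], [1.0, 0.0], [0.25, 0.75]]
print(mix_to_mono(stereo, 2))  # [0.0, 0.5, 0.5]
```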
  • src/musicutils.h

    --- a/src/musicutils.h (r8605361)
    +++ b/src/musicutils.h (re9eaaf4)
    the International Conference on Digital Audio Effects (DAFx-00), pages 37–44,
    Uni- versity of Verona, Italy, 2000.
    -  (<a href="http://profs.sci.univr.it/%7Edafx/Final-Papers/ps/Bernardini.ps.gz">
    -  ps.gz</a>)
    +  (<a href="http://www.cs.princeton.edu/courses/archive/spr09/cos325/Bernardini.pdf">
    +  pdf</a>)

     */
  • wscript

    --- a/wscript (r8605361)
    +++ b/wscript (re9eaaf4)
                DEVROOT += "/Developer/Platforms/iPhoneOS.platform/Developer"
                SDKROOT = "%(DEVROOT)s/SDKs/iPhoneOS.sdk" % locals()
    +            ctx.env.CFLAGS += [ '-fembed-bitcode' ]
                ctx.env.CFLAGS += [ '-arch', 'arm64' ]
                ctx.env.CFLAGS += [ '-arch', 'armv7' ]