Anyone? This happens to us all the time now, consistently, for over a week.
tasteprofile/status glitch?
svantana - we are taking a look -- Paul
Analysis not found
Does anyone know why this is happening?
How to simulate reaching the rate limit?
Capsule max and min track length
One thing I've discovered with the 8-second track is that EchoNest does not return bars, beats, or tatums from the analysis. Duration is still populated, though, and I think segments are as well.
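So anything that uses those lists needs a guard. A minimal sketch of what I'm doing now with remix's LocalAudioFile (the file name is a placeholder, and this is untested beyond my short clip):

import echonest.remix.audio as audio

track = audio.LocalAudioFile("short_clip.mp3")  # placeholder 8-second file
analysis = track.analysis

# bars/beats/tatums can come back as empty lists for very short tracks,
# so fall back to segments, which still seem to be present
for unit in ("bars", "beats", "tatums"):
    if not getattr(analysis, unit):
        print "no %s returned for this track" % unit
quanta = analysis.beats or analysis.segments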
Warning: no metadata returned for track. 'Track' object has no attribute
These errors are probably due to PATH issues. The PATH is:
/usr/local/jdk/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/X11R6/bin:/root/bin
Python is at /usr/bin/python, and the Python errors point at:
/usr/lib64/python2.6/site-packages/echonest/remix/audio.py
So I'm guessing the package was installed in that directory, which is not the one Python is importing from.
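A quick sanity check, run with the same python binary that runs quanta.py, will show which copy actually gets imported:

import sys
import echonest
print sys.executable      # which interpreter is running
print echonest.__file__   # which installed copy of the package gets imported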
python quanta.py beats ../music/Raleigh_Moncrief-Guppies.mp3 Raleigh_Moncrief-Guppies-Beats.mp3
/usr/lib64/python2.6/site-packages/echonest/remix/audio.py:913: DeprecationWarning: object.__new__() takes no parameters
return AudioData.__new__(cls, filename=filename, verbose=verbose, defer=defer, sampleRate=sampleRate)
en-ffmpeg -i "../music/Raleigh_Moncrief-Guppies.mp3" -y -ac 2 -ar 44100 "/tmp/tmpkYawDK.wav"
Computed MD5 of file is e6457a679dd8eac2912ef78ec738b8e6
Probing for existing analysis
/usr/lib64/python2.6/site-packages/echonest/remix/audio.py:96: DeprecationWarning: object.__new__() takes no parameters
return object.__new__(cls, *args, **kwargs)
Warning: no metadata returned for track.
Analysis not found. Uploading...
Warning: no metadata returned for track.
Traceback (most recent call last):
  File "quanta.py", line 70, in <module>
    main(input_filename, output_filename, units, equal_silence)
  File "quanta.py", line 30, in main
    audio_file = audio.LocalAudioFile(input_filename)
  File "/usr/lib64/python2.6/site-packages/echonest/remix/audio.py", line 936, in __init__
    tempanalysis = AudioAnalysis(filename)
  File "/usr/lib64/python2.6/site-packages/echonest/remix/audio.py", line 211, in __init__
    'confidence': getattr(self.pyechonest_track, attribute + '_confidence')}
AttributeError: 'Track' object has no attribute 'time_signature_confidence'
Warning: no metadata returned for track. 'Track' object has no attribute
Solved. I should have done this myself, but luckily the awesome Ukrainian folks at my hosting provider (WebHostingBuzz) figured it out (Thank you, Oleg). Uninstall remix via pip (we think the install had been interrupted) and reinstall via easy_install. There ya go.
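In other words (assuming the package really is named remix for both tools, as above):

pip uninstall remix
easy_install remix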
Retrieving Detailed Pitch and Timbre Data from the Echonest API
I've just started using the Echonest API for music analysis purposes and am trying to extract more information from the Echonest API.
Track analyses from the API gather pitch data indicating "the relative dominance of every pitch CLASS in the chromatic scale". For my purposes, however, I'm interested in the relative dominance of every PITCH in the audio file. I suspect Echonest imposes this limitation for economic reasons, both in terms of processing power and the resulting analysis's file size, but I'm wondering: is there a method that can be called to extract "full" pitch (and ideally timbre) data?
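For reference, here's how I'm pulling the per-segment data with the remix library right now (file name is a placeholder); each segment only exposes the 12-bin pitch-class vector and the 12-basis timbre vector:

import echonest.remix.audio as audio

track = audio.LocalAudioFile("song.mp3")  # placeholder input file
for seg in track.analysis.segments:
    # seg.pitches: 12 chroma values, one per pitch class (C through B)
    # seg.timbre: 12 timbre basis coefficients
    print seg.start, seg.pitches, seg.timbre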
Best
Best way to get track profile automatically
I'm writing an app to get as much info as I can about the songs in my collection. Basically I want to get the Track Profile's Audio Summary info for every one of my songs. I'm trying to wrap my head around the EchoNest API and I'm hoping for some hints on how to approach this.
Since I will (eventually) be querying for 18,000+ tracks, I want to do this as automatically as possible, and achieve 100% match accuracy. That is, I don't want to be asked to confirm every song I check. Failing to match is OK, but if it matches something I want to be sure that it's matching the song I queried.
To run a Track Profile query, I know that I need to provide the track's ID. What is the best way to ensure 100% accuracy on matches? It seems to me that I should get the ENMFP fingerprint for each song, then make a Song/Identify query to get the Song ID, then a Rosetta query to get the Track ID, and then use that Track ID to get the Track Profile.
Is that right?
If so, I'm a little lost on how to go about the Rosetta query. An example query for this would be a great help.
If the above isn't right, ANY pointers would be appreciated. Actually, any pointers would be appreciated in any case.
Thanks.
Best way to get track profile automatically
OK, it turns out that I just need to add "&bucket=audio_summary" to my Song/Identify query, and I get everything I wanted in a single query. And adding more buckets gets me more info. Sweet.
Just in case someone else has a similar question, my query was: http://developer.echonest.com/api/v4/song/identify?api_key=API_KEY&bucket=audio_summary&bucket=song_type
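And in case it helps, here's roughly how I'm calling it from Python (just a sketch; the fingerprint code string from the ENMFP codegen is a placeholder):

import json, urllib, urllib2

fp_code = "eJx..."  # placeholder: fingerprint code produced by the ENMFP codegen
params = urllib.urlencode({"api_key": "API_KEY", "code": fp_code})
# bucket is a repeated parameter, so append it by hand
url = ("http://developer.echonest.com/api/v4/song/identify?" + params
       + "&bucket=audio_summary&bucket=song_type")

response = json.load(urllib2.urlopen(url))
for song in response["response"]["songs"]:
    print song["title"], song["audio_summary"]["tempo"]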
Bug in a certain song?
I'm not sure if this is a bug or if I'm missing something. Or if this is even appropriate here...
I happened to pick one song for my testing, and it gets reported as being from a live album but is listed as type studio.
"title": "Statesboro Blues (Live At The Fillmore East/1971)""song_type": [ "studio"
{
    "response": {
        "status": {
            "version": "4.2",
            "code": 0,
            "message": "Success"
        },
        "songs": [
            {
                "score": 27,
                "title": "Statesboro Blues (Live At The Fillmore East/1971)",
                "artist_name": "The Allman Brothers Band",
                "song_type": [
                    "studio",
                    "electric"
                ],
                "tag": 0,
                "id": "SOEPWKM144BDC331B4",
                "message": "OK (match type 6)",
                "artist_id": "ARIO65L1187FB4D25F"
            }
        ]
    }
}
Capsule max and min track length
Couldn't remember how I had reconfigured capsule to play the whole track, but it was by replacing the inter variable with the whole track length:
track_length = len(track)
if verbose: print "Computing transitions..."
start = initialize(tracks[0], track_length, trans)
# Middle transitions. Should each contain 2 instructions: crossmatch, playback.
middle = []
[middle.extend(make_transition(t1, t2, track_length, trans)) for (t1, t2) in zip(tracks[:-1], tracks[1:])]
Bug in a certain song?
Scottes,
Thanks for pointing this out. Yes, this song should be categorized as "live". We'll have our data curation team take a look.
David
Song similarity by song_profile results
I have 10 songs. For each of them I have an audio_summary bucket of data from Echonest. Which parameters can I use to detect similarity between the songs? I tried comparing their 'tempo' and 'time_signature' parameters, but that doesn't work: really slow songs can have a tempo of 140+ and vice versa.
Song similarity by song_profile results
Song similarity is a complex, ill-defined problem. We don't currently provide an API for it.
On a small number of tracks, you could define the similarity metric yourself by looking at the timbre, pitch, rhythm, tempo, and subjective attribute data. Summarizing that data, and deciding which attributes matter and when, is what makes it particularly difficult.
Octave errors (doubling or halving) with tempo can happen. People also sometimes disagree on the best answer. If you send us a list of TrackIDs for those tracks, we can have a look at what might be going on there.
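As a rough starting point, a do-it-yourself metric over a few audio_summary fields could look like the sketch below; the scaling and weighting here are arbitrary, which is exactly the hard part:

import math

def summary_distance(a, b):
    # a, b: audio_summary dicts; smaller distance means more similar
    # Scale each field to roughly 0..1 before differencing
    feats = [
        (a["tempo"] / 250.0, b["tempo"] / 250.0),
        (a["energy"], b["energy"]),
        (a["danceability"], b["danceability"]),
        ((a["loudness"] + 60) / 60.0, (b["loudness"] + 60) / 60.0),
    ]
    return math.sqrt(sum((x - y) ** 2 for x, y in feats))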
Song similarity by song_profile results
Okay, song similarity is, but what about the tempo parameter exactly?
I want to let users of my program choose songs by tempo, but unfortunately the problem with this parameter remains: slow songs can get high tempo values and vice versa. Is it a bug or something I don't understand?
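The workaround I'm experimenting with in the meantime (my own guess, not an official fix) is to accept a song if its reported tempo, or its double or half, falls inside the user's range:

def tempo_matches(reported, lo, hi):
    # Tolerate octave errors: the analyzer may report twice or half
    # the tempo a listener would actually tap
    return any(lo <= t <= hi for t in (reported, reported * 2.0, reported / 2.0))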
Bug in a certain song?
Good to know. This was the first song I started testing with and wanted to make sure that I wasn't doing something wrong or misinterpreting some data.
Bug in a certain song?
Scottes,
That song should be fixed now. Thanks for pointing it out.
David
Check taste profile status
Thanks for your fast reply. Here it is:
{"action":"update","item":{"item_id":"1_1","song_id":"1","song_name":"The Look of Love","artist_id":"1","artist_name":"Dusty Springfield","favorite":true}}
Check taste profile status
Drop the "artist_id" and the "song_id" bits; let us know if that doesn't fix things!