Why is it taking so long at the moment?
“Long” is a relative term. 5 days may seem long compared to 2, but it’s quite welcome when it’s coming down from 10.
Every year there are established patterns. Some periods are quieter, like the summer, and some periods get a lot more submissions, like before the holidays. ‘Tis the season.
Just like the weather, we can try to forecast and predict but it’s never an exact science. So far, every year sees a lot more submissions than the previous one. So this holiday season has more than last year.
Last year it was 14 days maximum waiting time, and then we expanded the team significantly.
The fact that we’re only around 6 days now is a good sign. If the team hadn’t grown the way it did we might be at 20 or more days now.
Bottom line: while the goal is not to exceed 7 days, no one can say for 100% certain; it depends on how each week’s submissions go. What’s clear, however, is that there appears to be an industry trend.
So the best thing every author can do is… plan for it.
E-soundtrax said: It’s gonna go even longer than that, holiday times approaching!
almost 6 days.
Yes, every year it’s predictable, no surprise, like the tides.
The goal however is not to exceed half the longest wait time from last year. We’ll see how it goes with the submissions in the next few weeks leading up to Xmas.
This brings up other questions for me. Is it possible that uploading to the wrong genre is the cause of rejections? And will a reviewer ever take the liberty of changing it for you? For example, “Classical” and “Cinematic” may be easily confused, and a track that isn’t suitable for one may be for the other.
The answers to those questions are: (i) no, a track should not be hard rejected for quality simply based on the category selected; and (ii) it depends. When it’s an irregular occurrence, here and there, reviewers have the liberty to reassign the category, unless an established pattern of chronic miscategorization is observed. In the latter case, submissions may be held pending contact via support and/or ultimately soft rejected with notes to help with categorization.
Edit – just to clarify any confusion: looking further into the records, it appears that there was in fact a note for production mix/mastering that was dispatched, but it was not for the version you have linked on SoundCloud above. It was an earlier, different version submitted with the same title, which is why it was not immediately seen here.
Therefore, apologies for any confusion relating to the post above. That said, if you’d still like to clear up the matter, please do communicate with the quality team via support, as noted above.
We’re glad that you posted to the community to get feedback. However, it’s important for everyone here to understand that when feedback is invited, the community should not be inadvertently misled about the reasons given for a rejection. In other words, it’s important for everyone to be accurate when publicly relaying any notes obtained from the review team.
The records for this submission were reviewed and nowhere does it say there were any issues with the mix, specifically.
What was relayed by the attending reviewer was that certain passages were not arranged ideally. This is an arrangement issue, not a mix issue. The quality of the mix is perfectly fine, listening to the submission.
However, despite the message dispatched, we can conceivably revisit this case with the Quality Team, as it appears that a soft rejection may have been in order rather than a hard rejection. Among other things, the preview uploaded had no sound on it at all; it was just an mp3 with silence throughout.
If you’d like to revisit this submission, please send a message to email@example.com and advise the ticket ID here. We can then close this thread and the matter may be discussed in further detail.
Thanks for your attention and understanding.
lutch said: Are you talking about specific quality tools, or is there a plugin called quality tools? If you’re talking about certain quality tools, are there any examples?
Use quality tools and everything will be fine . Good luck to you .
It’s a language translation thing.
Tools means instruments in Russian.
Possibly one of two things has happened: either they hired an overzealous reviewer (or two) who rejects 99% of tracks on their shifts, or the review team is being pushed to make snap decisions.
Nope, neither of those things happened.
Regarding the figures, we can only take your proposed estimate as an intended exaggeration; otherwise, we can assure you that if any reviewer were anomalously rejecting literally 99% of the submissions they process, or even close to that, it would be noticed pretty quickly in the stats charts and genuinely addressed by the company. Submission, acceptance, and rejection percentages are well monitored per reviewer, just so you know.
When I took a hiatus a few months back, the review time was around 7 days; now it’s 16-24 hours. Obviously something has changed in terms of the review process.
Does anyone recall the general sentiment when the queues were at 7+ days? If you do, please raise your right hand.
Yes, there have been evident changes in review, that perception is spot on. Specifically, the number one author demand over the past year has been queue wait times, which have been prioritized as the team expanded to meet capacity, and not at the expense of quality.
So now we’re at around a day’s wait for the majority (something many thought we would never achieve), and our error-margin monitoring has remained essentially flat. And we still respond to case-by-case rejection inquiries via support in much the same way.
I hope they address this issue soon, and/or lay out a DETAILED description of what they expect production-wise.
That was done 3 months ago, on this thread, which clarified a lot and was well received. It should probably be stickied again, I think.
If the issue is that the review team is being pushed too hard, I’d prefer they go back to a week or more to review a track so they can take their time, rather than having all these great tracks flushed down the toilet.
Do people really want to wait a week again? It’s certainly not the general feedback we’ve gotten so far. Adam, I have to say that your opinion is respectable in essence, but it’s been demonstrated that applying the brakes and bringing queues back to 7 days would not solve the perceived inconsistency issue being described here.
Or possibly AudioJungle is reaching a critical-mass where they feel like they already have more than enough music or contributors, like some of the other stock music sites have. I guess all we can do is speculate until someone from Envato actually decides to weigh-in on this issue. Maybe we are all being paranoid, but I don’t think so, I think you can definitely feel this change, whatever it is.
The notion of reviewers rejecting for competitive reasons is, well, simply erroneous and inaccurate. It is silly and borders on conspiracy theory. Not all reviewers are authors, and of those who have active author accounts, the majority aren’t actively authoring.
Regardless, every single person on the team has been hired based on merit and especially trustworthiness and integrity. The entire team works together, with shared reviews on numerous item cases before they get rejected.
One thing is for darned sure: every single reviewer hates and regrets making a slip. But we admit our slips internally to the whole team, and everyone is made aware of them when they happen. I can tell you personally that each reviewer really takes it to heart to avoid making outright review errors.
As for giving feedback on every single rejection report on the forum, it’s not possible to respond to each one. That is not a pattern we are ever going to establish.
The community discussion and dialogue is always welcome, in any event.
It may seem to you now that “inexplicable” rejections are on the rise, but that may only be an impression emerging because more people are taking advantage of the forum for item discussion. Our recent initiative for more transparency, and for publicly addressing rejections when warranted, could easily have led to the assumption that we were meant to give feedback on every thread.
Unfortunately that’s not possible, and it was disclosed as such. Yet people seem to be taking the chance to post rejections more. That in itself is a good thing, because it stimulates discussion and allows us to monitor the pulse of the review process from both within and without.
Questioned rejections on the forum, however, are evaluated by both quality management and review staff, and when definitive discrepancies are noted that indicate an error, then the case can certainly be revisited.
That said, the initial item in this thread is not one that meets current acceptance. We know, due to the feedback we got, that a lot of musicians here can perfectly understand why.
Despite this, we’re still looking at ways the system’s Hard Rejection messages can be better conveyed. One worthy idea, which was Lemega’s, was to include information inviting item discussion.
Ultimately, we’re never going to claim to be perfect, which is why we’re interested in hearing the feedback; when we can see that a call falls on the wrong side of the fence, we have no problem admitting it and overturning a rejection.
Case in point – Marb… The bar has been raised, yes indeed. And it was raised to much acclaim and satisfaction by a vocal majority.
With that out of the way, if the echo of your bellow has rung out, we can candidly advise you that your last rejection was noted here, and is in fact one that was admissible. Check your inbox shortly.
Please note, everyone: a single overturned rejection does not a million mistakes make. That would simply be misconceived conjecture. Ultimately, it’s a matter of trusting that the reviewers, many of whom have been empathetic supporters of yours on the boards here and are former author colleagues of yours, have it in their minds and in their hearts to do the best job possible as a team. If it’s corruption you think you’ll find, you are 100% looking in the wrong place.
For the record, the team averages are pretty similar in terms of rejection numbers, and the fact is that rejections have actually gone down in recent times.
And guess what: in the very same way that exceptions happen when an item gets rejected that should not be, there are also exceptions when an item gets accepted that arguably should not have been.
This can impel an observer to conclude that consistency is completely off, but in fact it’s not, because the exceptions are objectively less statistically significant than they are perceived to be.
Because we acknowledge this, one of our planned initiatives is to gradually revisit the library’s content, in due time, according to a very specific set of parameters, with the intent of aligning the layers of consistency we’ve had to work with over the years. And that means all of our older portfolios, including all staff’s.
Those past tracks which do not meet current acceptance standards may be revisited and withdrawn from our library in the future.
And no, before anyone gets alarmed or goes off the deep end here, that does not mean there’s going to be a free-for-all retroactive rejection campaign.
It simply means that older content which clearly and categorically cannot justifiably remain active by more recent standards, objectively, may get hidden (with a possible option for update), to narrow the gap in consistency that’s accumulated over the years, and further align items that were processed up until today.
Phew. Hope that makes some sense and resonates well enough with folks. If you’ve managed to read through this whole post, thanks for taking the time! Otherwise, let’s get back to work, and we’ll keep you posted with further updates as they evolve.
Peace out everyone.
Hey guys, it wouldn’t be AudioJungle if it didn’t look like a jungle sometimes, would it!
It appears this confusion has been compounded by a couple of errors on our part.
For starters, while the KB terms briefly touch on the notion of subjective titles, that article was written a long time ago and is meant to be interpreted as a general guideline for extreme cases, for all marketplaces. However, due to the nature of distinct media and library indexing practices, each marketplace has evolved to approach the policy slightly differently, and this specific point does not come across in the article’s present form.
Focusing on AJ, to be clear, we interpret the subjectivity of a title in a way that discourages superlative subjective titles.
This means that titles such as “Best Track of 2014”, “The Most Amazing Music Ever”, or “Finest Production in the World”, as examples, should not be accepted by the review team, and alternatives should be requested, suggested, or temporarily set by reviewers for such cases.
However, to be clear, we are not imposing a ban on any specific words per se (well, except words you don’t tell your mother).
So ultimately, as Phil noted, we can consider the following titles as acceptable, using the terms above:
1. Best Regards, Best Friends Forever, Best Success. OK.
2. Amazing Inspiration, Amazing Days, and yes.. Amazing Grace. OK.
3. Finest Diamonds, Beautiful Worlds, Inspiring Ideas, Ultimate Fight, Epic Glory… Etc etc etc.. OK.
In this sense, admittedly, the rule was essentially applied too strictly by the review team when we asked you to change “Amazing Technology”, Lumen. That title is considered acceptable, and we can change it back for you if you want. No harm done; we can relax.
In conclusion now, what are the lessons here, and next steps?
1. We are going to reclarify this specific aspect of titling with the whole review team immediately.
2. We are already revisiting the KnowledgeBase content in this capacity, so it gets clarified in the next update deploy. (Currently a project in the works, slated for the coming weeks.)
3. We are going to align our community and review teams further on the matter to ensure the details of each marketplace are taken into account before any subsequent announcements are made.
4. We’ll welcome any further reports of exceptions or cases where titles or reviews are genuinely incompatible with the information stated above, and we can address concerns on a case by case basis via support.
Our goal is to broaden and reinforce consistency here, where there’s potential for pitfalls, so please send any questions on this topic our way if you are unsure about a title of yours.
Also you are always free to add comments in the Notes to Reviewer field, when you submit an item or update to the queue, if you need clarification.
Thanks everyone for your attention and understanding here, and bona fide apologies for the hiccups.
We’ll edit the item links out and unlock the thread now to allow the conversation to continue – as long as it stays on topic and doesn’t descend into proverbial anarchy, s’il vous plaît.