Reviewers are the worst. Always changing things: speakers, cables, turntables, CD players. The system never, ever fully breaks in. The whole review process is broken.
Broken cutters, broken saws,
Broken buckles, broken laws,
Broken bodies, broken bones,
Broken voices on broken phones.
Take a deep breath, feel like you're chokin',
Everything is broken.

Every time you leave and go off someplace,
Things fall to pieces in my face.
Broken hands on broken ploughs,
Broken treaties, broken vows,
Broken pipes, broken tools,
People bending broken rules.
Hound dog howling, bull frog croaking,
Everything is broken.
|
Let's spend today arguing about "broken" vs. "broke in"
😴💤 |
My present configuration consists of solid-state electronics from one manufacturer, left on 24/7/365. I believe leaving solid-state gear powered on does make a difference. The preamp took the longest to break in (circa 400 hours); the speakers took about three weeks of listening. Now all I have to do is slip in a CD and turn up the volume. No discernible need to "break in" anything.
|
Don't forget what's on the other side of "break-in": break-down. At some point the performance of any device starts deteriorating.
For the neurotic, that means there is only one day in the life of a device on which it performs at its optimum level. And it is impossible to expect that day for any one component to line up with the optimum day for every other component in the system!
For myself, I recognize that I am the biggest variable in my system. The differences I hear are often more likely due to my mental and physical state than anything to do with my system.
|
I concur with @almarg, @hilde45, and @zavato… Control of independent variables is of the utmost importance in the review process, and should be implemented as much as possible to achieve reasonably meaningful findings.
In a perfect world, twin copies of the target component or cable would be used for periodic comparisons of performance throughout the break-in process… One copy being the full break-in target, and the second serving as a control with "low mileage". Now I could kick myself, because I just realized that I had a perfect opportunity to do exactly that when I examined the Rowland M535 in bridged mode a while ago… I should have started in stereo mode and used one of the two units as a low-mileage control, instead of breaking in the pair as a bridged set. Oh well, next time I evaluate a bridgeable amp, I'll apply this technique for sure.
It would also be nice to track voltages, air temperature, and humidity… Next time I am born I'll make sure I stay fully sighted, so I can read measuring equipment… Oh well *Grins!*
On the other hand, I do control the test environment as much as possible, as follows:

- I keep the system configuration invariant during each individual evaluation phase. All components remain the same; cabling remains the same; usage of AC outlets remains the same; no equipment has moved around the stands; the layout of cabling on the floor remains the same; no furniture has been moved; even the orientation of window treatments remains the same.

- All ancillary equipment is already well stabilized: in my case, everything has been with me between two years (cabling) and 14 years (CD transport)… I was forgetting the equipment support benches (60 years).

- The break-in process continues 24/7, except for power-off time during thunderstorms and for discharging capacitors (I did this twice).

- I do any critical listening at least several hours after the last power-up… More typically, days or weeks after.

- I make consistent use of review material… A test CD contains the same sampling of music tracks that I have used for evaluating equipment at home, at shows, and in stores for the last 15 years… In addition, I use several other CDs representative of music genres of interest to me.

- While I listen to entire CDs during critical listening, I concentrate on particular passages that I know can expose possible flaws or merits in a review target: harshness from intermodulation artifacts, pillowy/unspecific bass, changes in harmonic exposure across broad treble-to-bass arpeggios, transient clarity vs. opaqueness, decay complexity, staging/imaging changes, very low-level information, ambient noises, performers' subvocalizations.

- I document observations in contemporaneous notes, also logging dates and break-in hours. When cleaned up, these form the basis for diary posts and, in an earlier era, were integrated into published reviews.

- I use a break-in tracker spreadsheet… This maintains break-in status for each day, hours of operation each day, start time, power-down time, the total hours count since the beginning of the project, and completion date projections.

At the end of each project phase, for instance
using an integrated amplifier as a complete integrated, I make minimal changes to start the next phase, which might be, for example, feeding the line-level signal from the linestage of the integrated into my reference monoblocks… For this I will use the same pair of well-broken-in XLR ICs that I have been running for the last two years from my reference DAC to the monos. I will run this configuration for at least a couple of days before any new critical listening, and will use the same tracks and passages that I used on the integrated. I will probably need to go back and forth between the full integrated and its linestage subsystem into the monos to derive a reasonable assessment of the difference. Yes, I know, the reintroduction of an IC will somewhat smear the results. I would then use the same XLR ICs when I test the output of my reference DAC into the integrated's linestage + amplification subsystem.
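As an aside, the break-in tracker spreadsheet described above could just as easily be kept as a small script. The sketch below is a minimal Python illustration, not the author's actual spreadsheet: the log layout, dates, and column order are hypothetical assumptions, and the 400-hour target simply echoes the preamp figure mentioned earlier in the thread.

```python
from datetime import date, datetime, timedelta

# Hypothetical daily log entries: (date, power-up time, power-down time).
# These rows and the 400-hour target are illustrative assumptions only.
log = [
    ("2024-03-01", "09:00", "23:00"),  # 14 h
    ("2024-03-02", "00:00", "23:59"),  # ran essentially all day
    ("2024-03-03", "08:30", "22:30"),  # 14 h
]

TARGET_HOURS = 400  # e.g. the ~400 h break-in figure cited for a preamp

def hours_run(entry):
    """Hours of operation for one day's log entry."""
    _day, start, stop = entry
    fmt = "%H:%M"
    delta = datetime.strptime(stop, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

total = sum(hours_run(e) for e in log)       # total hours since project start
rate = total / len(log)                      # average hours per day so far
days_left = (TARGET_HOURS - total) / rate    # projection at the current rate
projected = date.fromisoformat(log[-1][0]) + timedelta(days=days_left)

print(f"Total: {total:.1f} h; projected completion: {projected}")
```

Running it prints the total hours accumulated and a completion-date projection, mirroring the "total hours count" and "completion date projections" columns of the spreadsheet.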
As you might imagine, I can't examine dozens or even a handful of components a year this way: it is a very time-consuming process. Nevertheless, it is for me a happy labor of love which I enjoy sharing with fellow lovers of music and sound… Others may feel otherwise.
Regards, G.
|