
Method R Workbench 9.4

Yesterday, we released Method R Workbench 9.4.1.0. This article is a tour of some of the 50 new application behavior changes.

Command History Scrolling

In Workbench 9.3, it was a pain to try to find a given report in the output pane. For example, do this:

  1. Run the “Time by line range” action upon a 1,000-line trace file, with 1- as the range. The output will be 1,000 lines.
  2. Run the “Duration by call name” action upon the same file. The output will be about 15 lines.
  3. Run the “Time by line range” action again. Another 1,000 lines.
  4. Now try to find the “Duration by call name” report, which is buried between two 1,000-line reports.

This was a fun design problem. My initial idea was a “table of contents” dialog. It would list the actions that had been run, and then clicking on one would scroll the output pane back to the first line of that action’s output. But I hate dialogs.

It sucks when you hate your own best idea. But that’s what teams are for. Jeff proposed a far better idea: simply link the command history feature with the output pane. So now, finding that “Duration by call name” report is as easy as:

  1. Arrow up and down through the command text field until you get to the action you want the report for. Then, presto!, there’s its report in the output pane.

We want our software to feel like “PFM” (pure freaking magic). This design element is one of the opportunities we seek out to fit that description. It wasn’t easy to implement, given that the size of the output pane is variable (View › Zoom), but that’s our problem to worry about, not yours.

CSV for Importing into Excel

One of the stories I like to tell is about the time we helped a big company fix an overnight batch job duration problem by exporting our files pane content into CSV that you can copy and paste into Excel. As a result of that engagement, we added a new runnable action called “Details by file, for importing into Excel”. It used our mrls utility. The problem is, mrls is slower and less accurate than mrprofk, the program that creates the information for the files pane.

There’s no need to be slow and wrong when you can be fast and right. So we eliminated the action in the actions pane, and we added a shift-click feature to the existing “Copy selected file rows to output” button. Now it’s the “Copy selected file rows to output (shift-click for CSV)” button. It’s both accurate and blazing fast.

Sounds

Clearly, most of the information transfer from the Workbench application to your brain occurs through your visual channel. But one day in July 2020, one of our customers said this:

Running mrskew is so fast and the resulting display update so fluid I often cannot even tell if it ran…

After studying the problem a little bit, we decided to begin using sound as an additional channel for transmitting information from our Workbench application to your brain. Now, when you run a command that might take a while, you’ll get a little thumbs-up or thumbs-down sound when it’s finished. You can look away from the screen and still know definitively that your action has finished. We added that feature in August 2020, in Workbench 9.1.

One thing had been bugging me, though: the sounds didn’t really fit in with the other sounds my system makes. On macOS and Windows, there’s actually a system settings option that allows you to specify an alert sound for your system. You get to choose which sound you want, how loud it should play, and which device it should play on. Prior to 9.4, our Workbench application didn’t respect those settings. In 9.4, we do.

It’s nice to fit in.

Undo and Redo

We worked long and hard on undo and redo in this release. Prior to 9.4, undo was restricted to the files pane. You could undo box-checking, and you could undo load and unload operations. We found the feature confusing to use, though, for a variety of reasons. At the same time, we didn’t offer any undo features in the command text field (the text field at the top of the output pane).

So, in 9.4.1, we’ve added good old-fashioned undo and redo with ⌘Z and ⇧⌘Z (Ctrl-Z and Shift-Ctrl-Z on Windows) in the command text field and both the filter fields.

Move to Trash

Sometimes in our Workbench workflow, we come across files that we know we’re never going to want to load again. Those are the files with zero durations, or files with trace level 0 (see our new “Level” column, which reveals each file’s Oracle tracing level). Going to the filesystem browser to delete those files is tedious and dangerous—I hate look-here-but-click-there workflows. It’s just easier to select the files you don’t want and delete them, with File › Move to Trash (it’s Move to Recycle Bin on Windows), right in the Workbench application.

It may sound dangerous, but don’t worry: Move to Trash just moves items to your trash can. If you make a mistake, you can still retrieve them.

If you don’t see the Move to Trash (or …Recycle Bin) option in your File menu, then you should update your JDK version to 9+. You can see the version you’re using by clicking Help › Diagnostics.
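(For the curious: the java.awt.Desktop.moveToTrash API first appeared in Java 9.)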

Label Expressions

In my August 2023 article called “A Design Decision,” I talked about an enhancement that lets you use expressions in --group-label and --select-label values. What we didn’t realize at the time was that, in those expressions, mrskew wouldn’t let you refer to functions you’d defined yourself in the mrskew --init block. In 9.4.1, we’ve fixed that problem.

It’s a neat feature. We use it now in our three histogram RC files (disk.rc, p10.rc, ssd.rc) to make our code more elegant. The ability to use --group-label='label(0)' is the trick. I expect our users to push the feature even harder. See, for example, Jared Still’s amazing --init blocks on Slack.
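If you want a feel for the pattern, here’s a sketch (this particular label function is a made-up illustration, not the one in our RC files):

mrskew --init='sub label { sprintf("%8s %8s", "FILE", "BLOCK") }' \
   --name='db.*read' \
   --group='sprintf("%8d %8d", $p1, $p2)' \
   --group-label='label(0)' \
   *.trc

The --init block defines the function once; the --group-label expression just calls it.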

And More…

In the past few years, we’ve been increasingly required to carve through hundreds or thousands of trace files. We’re talking so many files that referring to “*” on the command line overfills the command-line buffer. So each release in 2023 and 2024 has included features that make managing lots of trace files easier. You can read all about what we’ve done at our Workbench Release Notes page.

I hope you enjoy!


Method R Workbench Party Notes 2023-12-01

If you had a chance to join my Workbench Party today, thank you for your time with us. If not, then I hope to see you next time.

Here are a few links that should help fortify what we covered:

  1. “Solving the Unsolvable Performance Problem” white paper (2 pages)
  2. Method R Software workflow (1:54)
  3. Method R software
  4. Introduction to Method R Trace (2:26)
  5. Video introduction to Method R Workbench (5:34)
  6. Demo: airport OS with five timeouts per hour (4:58)
  7. Method R Workbench video tips
  8. Installation instructions for Method R Trace extension for SQL Developer
  9. Installation instructions for Method R Workbench
  10. Documentation pages for all Method R software tools
  11. Method R on Slack

How Did You Make mrskew 20× Faster?

A couple of years ago (June 30, 2020), we released Method R Workbench version 9.0.0.66. It had 113 new features and bug fixes. One of those features was case 7800: “mrskew is now 10–20× faster than before.”

We’re prone to sneaking in performance improvements like that. It’s because we, too, use the software we sell, and we don’t like waiting around for answers any more than you do.

mrskew, in case you don’t know, creates flexible, variable-dimension profiles. It’s a skew analyzer for Oracle trace files.

…A what?

It’s a tool that can query across trace files (thousands of them, if that’s how many you have) and answer questions like these:

  1. What kinds of calls dominated the response time of your user’s experience? Imagine for the sake of this example that the answer is “read calls.” How much time did read calls take? How many read calls did your program make?
  2. Were all your read calls the same duration? Or did some take longer than the others? How much time could you save if you eliminated the slowest 10,000 read calls?
  3. How many blocks did the longest read calls read?
  4. What are the file and block IDs of the longest read calls?
  5. Are the slowest read calls associated with a particular file?
  6. Are they associated with a particular SQL statement?
  7. On what line of what trace file can you find information about your longest read call?

Our mrskew tool can answer questions like these and more.

Here are the commands to do it. Don’t let these scare you. You can summon any one of them (or any of 30+ others) with just a click, in our Method R Workbench:

  1. mrskew *.trc
  2. mrskew --name='read' --rc=p10.rc *.trc
  3. mrskew --name='read' --group='"$p3"' --gl='"BLKS/READ"' *.trc
  4. mrskew --name='read' --group='"$p1:$p2"' --gl='"FILE#:BLOCK#"' *.trc
  5. mrskew --name='read' --group='"$p1"' --gl='"FILE"' *.trc
  6. mrskew --name='read' --group='"$sqlid"' --gl='"SQLID"' *.trc
  7. mrskew --name='read' --group='"$base:$line"' --gl='"FILE:LINE"' *.trc
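(In these commands, the --gl option supplies a heading for the group column; it’s a short spelling of the --group-label option.)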

Now, imagine trying to ask 2GB of trace data all these questions. Without mrskew, it would probably take you a day or more to fish the answers out of your trace files (don’t bother looking in AWR or ASH; they’re not there).

A Workbench 8 mrskew execution on 2GB of input takes about 4 minutes. That’s about half an hour to run all seven commands. That’s pretty good compared to a day or two of fishing.

A Workbench 9 mrskew execution on the same input takes only about 12 seconds. That’s less than 2 minutes to answer all the questions I’ve posed here. That’s remarkable.

2019 MacBook Pro (Intel)    mrskew *.trc (2GB)
Method R Workbench 8        240 seconds (4 minutes)
Method R Workbench 9        12 seconds

mrskew execution times before and after the Method R Workbench 9 upgrade.

So, an interesting question, then, might be, “How did you do that?”

Well, that’s easy: a long time ago, I hired Jeff Holt.

How did Jeff do it?

Simple. He rewrote mrskew in C.

In Workbench 8, mrskew was a Perl program that I had written in 2009. Perl is admittedly slow, but I was interested in having a program that users could interact with using Perl’s full expression syntax.

mrskew worked really well, and we used it a lot. But it always felt weird that it was so much slower than our other utilities that do even more work (like mrprof). So Jeff, in his spare time, investigated whether he could rewrite mrskew in C. It was no small feat, given that I insisted upon keeping the full Perl expression interface.

One day he surprised me. The new C version of mrskew was passing all our automated tests and we could probably ship it now. I asked him how much faster it was. He said about 20×.

I’m used to this kind of thing with Jeff by now. But still.

The result of Jeff’s investigation is that now we have a skew analysis tool that works just as fast as our other outrageously fast tools, even when you’re battling data by the gigabyte. Today, mrskew is a standard feature of pretty much every performance improvement project we hook into, and we’re grateful that it doesn’t make us wait a long time for the answers we need.

To get a better understanding of what skew is and why it’s important, see chapter 38 of How to Make Things Faster. If you’re interested in more detail about mrskew, visit our mrskew manual page.


Coherency Delay

Today, a reader of How to Make Things Faster asked a question about coherency delay (chapter 47): How does coherency delay manifest itself in trace files?

My own awareness of the concept of coherency delay came from studying Neil Gunther’s work, where he defines it mathematically as the strength β that’s required to make your scalability curve fit the Universal Scalability Law (USL) model.
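For reference, the USL models a system’s relative capacity at concurrency level N as

  C(N) = N / (1 + α(N − 1) + βN(N − 1))

where the α term represents contention (queueing for shared resources) and the β term represents coherency delay (the cost of keeping shared data consistent). (I’m paraphrasing Gunther’s notation here; he has used other symbol names in various writings.)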

Such a definition can be correct without being especially helpful. It doesn’t answer the question of what kind of Oracle behaviors cause the value of β to change.

I think that one answer may be “the gc stuff.” But I’m not sure. One way to find out would be to do the following:

  1. Find a workload that motivates a lot of gc events on a multi-node RAC system.
  2. Fit that configuration’s behavior to Gunther’s USL model.
  3. Run the same workload on a single node.
  4. Fit that configuration’s behavior to USL.
  5. If the value of β decreases significantly from the first workload to the second, then there’s been a reduction in coherency delay, and there is a strong possibility that the gc events were the cause of it.

That may not feel like a particularly practical proof (it would require a lot of assets to conduct), but it’s the best proposal I can think of here at the moment.

The biggest problem with executing this “proof” (whether it really is a proof or not is subject to debate) is that there’s probably not much payoff at the end of it. Because what does it really matter if you know how to categorize a particular response time contributor? If a type of event (like “gc calls”) dominates your response time, then who cares what its name or its category is? Your job is to reduce its call count and call durations.

(Lots of people have already discovered this peril of categorization when they realized that—oops!—an event that contributes to response time is important, even if the experts have all agreed to categorize it using a disparaging name, such as “idle event.”)

Whether the gc events are aptly categorizable as “coherency delays” or not, my teammates and I have certainly seen RAC configurations where the important user experience durations are dominated by gc events. In fact, back in roughly the year 2000, when RAC was still called Oracle Parallel Server (OPS), our very first Profiler product customer was having ~20% of their response times consumed by gc stuff on a two-node cluster.

We solved their problem by helping them optimize their application (indexes, SQL rewrites, etc.), so that their workload ended up fitting comfortably on a single node. When they decommissioned the second node of their 2-node cluster, their gc event counts and response time contributions dropped to 0. And another department got a new computer the company didn’t have to pay for.

The way you’d fix that problem if you could not run single-instance is to make all your buffer cache accesses as local as possible. It’s usually not easy, but that’s the goal. And of course, RAC does a much better job of minimizing gc event durations than OPS did 23 years ago, so globally it’s not as big a problem as it used to be.

Bottom line, it might be an interesting beer-time conversation to wonder whether gc events are coherency delays or not, but the categorization exercise is only a curiosity. It’s not something you have to do in order to fix real problems.


How Slow Programs are Like Christmas

Method R Corporation makes systems faster. And we make people who make systems faster, faster. We train people to become performance optimization heroes.

Here is my story about why you should be interested in our training.


Slow programs remind me of Christmas when I was a kid. In early December, my parents would put a present for me under our Christmas tree. It would be wrapped up so I couldn’t see what was inside it. The unwrapping would not happen until Christmas morning, December 25. (That was the best-case scenario. If my Dad couldn’t be home because of work, then Christmas morning would come a day or two late.)

So, every day, for nearly a month, I would see that present under the tree and wonder what was in it. 

I’d take any clue I could find. What shape is it? What does it weigh? What does it sound like when I shake it? (Sometimes, my Mom and Dad would prohibit me from shaking it.) No matter how desperate the curiosity, all I could do was guess.

When Christmas came, I’d finally get to tear the paper off, and I would now see, plain as day, what had been in that box the whole time. All the clues and possibilities would collapse into a single reality. Finally, there was no more mystery, no more guessing.

Slow programs are like that. The clues aren’t enough. You guess a lot. But with slow programs, there’s no specially designated morning when your programs reveal their mysteries to you. They just keep irritating you, with no end in sight. You need somebody who knows how to tear the wrapping paper off those programs so you can see what they’re doing wrong. 

That’s the somebody I like being. The role scares a lot of people, but it doesn’t scare me. That’s because I trust my preparation. I know that I have three particular assets that tilt the game in my favor. 

Those assets are knowledge, tools, and community. With the knowledge and tools I have, I don’t get stumped very often. But when I do, I have a network of friends who’ll help me out. My friends and I can solve just about anything. These three assets are huge for both my effectiveness and my confidence.

These three assets aren’t just for me. They’re for you, too. 

That’s the aim of my online course called “Mastering Oracle Trace Data.” In this course, I’ve bundled everything you need to claim those three assets for your own:

  1. You’ll learn the details about Oracle traces and the stories they’re trying to tell you. This is your knowledge asset.
  2. You’ll have, for the duration of your choosing, full-feature access to Method R Workbench, the most comprehensive software in the world for mining, managing, and manipulating Oracle traces. This is your tools asset.
  3. You’ll have access to our Slack channel, a global community of Oracle trace enthusiasts that can help you whenever you get stumped. You won’t be alone. You’ll have people who are there for you. This is your community asset. 

You can also fortify all three of your new assets by purchasing office hours for the course. Office hours are Zoom sessions, where you can spend time with my friends and me, discussing your questions about the material.

If you’re interested in becoming a more effective and confident optimizer, you can get started now. Just visit our course page for details.


That’s my story. I hope you’ll contact us if you’re interested.

And if you like stories like this, you’ll find a lot more in my How To Make Things Faster book, available wherever books are sold.


A Design Decision

This week, my team at Method R devoted some time to an enhancement request that required an interesting design decision. This post is about the analysis behind that decision.

The enhancement request was for our flagship product called Method R Workbench. It’s an application that people use to mine, manage, and manipulate Oracle trace files.

One of its features, called mrskew, is a tool that allows a Workbench user to filter, group, and sort the raw dbcall and syscall data that Oracle Database processes write to their trace files. You can use mrskew from within the Workbench application, or from a *nix command line.

Here’s an example of a mrskew command. It’s what you would use to find out how long your program spent reading Oracle blocks from secondary storage. It will show you which blocks it read, how long they took, and how many times each block was read:

mrskew --name='db.*read' \
   --group='sprintf("%8d %8d", $p1, $p2)' \
   x_ora_1492.trc

Here’s the output:

sprintf("%8d %8d", $p1, $p2) DURATION      % CALLS     MEAN ... 
---------------------------- -------- ------ ----- --------
                  2        2 0.072918   1.0%    26 0.002805 ...
                 33   698186 0.051940   0.7%     1 0.051940 ...
                 50   339841 0.049261   0.7%     1 0.049261 ...
...

The important thing in this report is the meaning of the $p1 and $p2 variables. The combination of these two variables happens to represent the data block address (the file number and block number) of an Oracle block that was read by some kind of an Oracle read call. It would be nice for the report to tell you that instead of just telling you that the first two columns of numbers are the output of an sprintf function call.

We have a command-line option for that. The ‑‑group-label option lets you assign your own title for the group column. So, with some careful character counting, you could use…

‑‑group-label='    FILE    BLOCK'

…to get exactly the heading you want:

    FILE    BLOCK DURATION      % CALLS     MEAN ... 
----------------- -------- ------ ----- -------- 
       2        2 0.072918   1.0%    26  0.002805 ...
      33   698186 0.051940   0.7%     1  0.051940 ...
      50   339841 0.049261   0.7%     1  0.049261 ... 
...

That makes sense. Now it’s easy to see that Oracle has read one block (file #2, block #2) 26 times, consuming a total of 0.072918 seconds reading it.

The group label fits the output only because of the careful character counting. The enhancement request was to allow the ‑‑group-label option to take an expression, not just a string. Like this:

--group-label='sprintf("%8s %8s", "FILE", "BLOCK")'

That way, the requester could print out the header he wanted, perfectly aligned, by just syncing his ‑‑group‑label expression to his ‑‑group expression, without having to count space characters that are literally invisible.
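Combined with the earlier command, the whole thing would look like this:

mrskew --name='db.*read' \
   --group='sprintf("%8d %8d", $p1, $p2)' \
   --group-label='sprintf("%8s %8s", "FILE", "BLOCK")' \
   x_ora_1492.trc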

It’s a smart idea. The group label option should have been designed that way from the beginning. We eagerly approved the enhancement request and began thinking about the design.

When we thought it through, we ended up with two different ideas about how we could implement this enhancement:

  1. Redefine ‑‑group‑label to take an expression instead of a string. mrskew will calculate the value of the expression before printing the column label.
  2. Create a new option, say, ‑‑new‑group‑label, that takes an expression as its argument. And leave ‑‑group‑label as it is.

The first idea is how the enhancement request was worded. The second idea entered our minds because the first idea creates a compatibility problem: if we change the spec of the existing ‑‑group‑label option, it will break some existing mrskew scripts. For example, these will work in Workbench 9.2.5:

--group-label=FILE
--group-label="FILE BLOCK"

But if we redefine ‑‑group‑label to take an expression instead of a string, then these won’t work anymore. People will need to quote their string expressions like this:

--group-label='"FILE"' 
--group-label='"FILE BLOCK"'

In the end, we decided to redefine the existing option and live with the compatibility breach.

The way we make decisions like this is that we create strenuous arguments for each idea. Here are some of the arguments we considered en route to our decision.

First, the customer experience (cognitive expenditure).

Everyone who participated in the debate had the customer experience foremost in mind. But how can we objectively measure “customer experience”? How do you structure a scientific debate about the superiority of one experience over another?

One way to do it is to measure cognitive expenditure—the amount of mental effort that a user has to invest to get the desired outcome from our software. We want to minimize cognitive expenditure, to maximize a customer’s return on investment of effort.

We began by realizing that responding to this enhancement request with one of our two ideas would necessarily force the user into one of two new regimes:

  1. The syntax of ‑‑group-label has changed.
  2. There’s a new ‑‑new-group-label option.

In regime 1, our users would have to learn the new syntax. That’s a cognitive expenditure. But it’s a one-time expenditure, which is good. The new syntax would be consistent with the existing ‑‑group syntax, which is actually a cognitive savings for our users over what we have now. However, if a customer had saved any scripts that used the old syntax, then the customer would have to convert those scripts. That’s a cognitive expenditure in a loop (one for each script), which is bad.

In regime 2, our users would have to learn about ‑‑new-group‑label, which is a cognitive expenditure. They’d still have to remember (or relearn) about ‑‑group‑label, too, which is a similar cognitive expenditure as the one in regime 1. They wouldn’t have to modify any old scripts, but they would have to make the choice of whether to use ‑‑group‑label or ‑‑new-group‑label, every time they wrote a script in the future. That’s another cognitive expenditure in a loop (one for each script), which is bad.

Second, the developer experience (technical debt).

We also need to consider the developer’s experience. We don’t want to create code that increases technical debt and makes the product unnecessarily difficult to support.

If we redefine ‑‑group-label, there’s no long-term effect to worry about. But if we add ‑‑new‑group‑label to the story, I would expect people to wonder: why are there two such similar group label options, when one (the one that takes an expression) is clearly superior? And why does the inferior one have the better name?

At some point in the future, I envision wanting to clean up the cruft and have just the one group label feature. Naturally, the right name for it would be ‑‑group‑label. But of course, changing the spec that way would introduce a compatibility problem. To make things worse, this would occur in the future when—one would hope, if our business is growing—such a decision would impact even more customers than it would today. So then, why create the cruft in the first place? It’ll be a worse problem later than it is now.

The question that really seals the deal is: who will the change really affect? It’s really a probability question about customer experiences.

Most users who use the Workbench application will never experience our group label option directly. It’s there for everybody to use, but our Workbench has so many predefined reports built into it, most users never need to touch the group label option for themselves. When they do need to modify it, they’re usually tweaking a report that we’ve predefined for them, which is a low–cognitive-expenditure experience.

In the end, the Method R team bears almost the entire cost of the ‑‑group‑label redefinition; the revisions it required were all on our side.

Most users will experience the benefit of the ‑‑group‑label change, without ever knowing that, once upon a time, it changed. And that’s the way we want it. We want the product to be as smart as possible so that our customers get the most out of their investment, both cognitive and financial.


Fill the Glass

Today, Cary Millsap hosted the inaugural episode of his new weekly online session, called “Fill the Glass.” Episode 1 was an ask-me-anything session, covering topics including how to access the Method R workspace in Slack, advice about being your own publisher, and our GitHub repository (available now) for Cary’s and Jeff’s new book, “Tracing Oracle” (available soon).

Visit our “Fill the Glass” page for access to past recordings and future live sessions.


Insum Insider: How to Optimize a System with Cary Millsap

Today, Michelle Skamene, Monty Latiolais, and Richard Soule of Insum chatted with me for an hour on their live stream. I hope you’ll enjoy it as much as I did.


Newsletter 2022-08-31

Here are some of the things we’ve been working on.

Click here to subscribe.

New Book: Faster

Recently, I published a new book called Faster: How to Optimize a System. This one’s different from anything I’ve ever written. Faster is a book about how to make things go faster—mostly computers, but actually just about anything.

Faster is the single most concise material on the topic of performance analysis and optimization out there.

Jonah H. Harris, director of AI & ML at The Meet Group

Faster is a how book and a why book, but mostly, it’s a story book. It’s a personal journey that connects the dots about pretty much everything I’ve ever learned since becoming a consultant in 1989.

Cary Millsap has a gift. Yes, he’s brilliant at making things run faster, but his true genius is translating complex problems into simple, powerful ideas.

Liz Wiseman, New York Times bestselling author of Multipliers and Impact Players

Faster is meant to appeal not just to techies, but to anyone who comes into contact with technology. I wrote it in a style that company leaders and project teams and all their users can follow. It contains dozens of short chapters that you can read serially, or that you can enjoy at random.

Cary is one of the best presenters—technology or not—that you will come across, and this clearly comes through in Faster. It’s both satisfying and refreshing to see him explain effortlessly, in normal English, these topics that so many people stumble over.

Guðmundur Jósepsson, director and performance specialist at Inaris

I hope you’ll give Faster a try. If you have a copy, I’d love to hear from you. If you feel good about doing it, I hope you’ll help me spread the word and post a review at Amazon.

I can’t believe I was in my forties the first time I saw how to optimize a system the way Cary and Jeff do it. Now it doesn’t even make sense to me that anyone would try it any other way.

Richard Russell, former 7-Eleven enterprise architect

P.S.: Let me know if you’re interested in an online or live Faster workshop. I can help your whole department think clearly about performance.

New Course: Mastering Oracle Trace Data

Do you have training budget, but you’re tired of the traditional stuff?

We have what you need: our course at Thinkific called Mastering Oracle Trace Data, based on the book of the same name. It’s nearly 14 hours of material, packaged in a way that’s perfectly suited to our post-apocalyptic, travel-prohibited metaverse.

It’s got tons of helpful video material, with lots of worked examples, and even a guest speaker or two.

WARNING: You’ll get the most out of this course if you have access to our Method R Workbench software product.

UN-WARNING: The course actually includes a limited-time license for Method R Workbench (and Method R Trace!), so you can experiment and solve problems while you’re experiencing the course.

MORE-WARNING: For the ultimate training experience, consider adding in some online office hours or an on-site visit (not available everywhere).

New Partner: Insum

A few months ago, I reconnected with one of my favorite Method R course alumni, Richard Soule. His employer, Insum, is a world leader in everything Oracle APEX. A few great conversations later, I’m proud to announce a new partnership between Method R Corporation and Insum.

Method R will provide software and training for Insum employees; Insum will promote Method R and help us develop new trace data collection software for APEX developers! I’m really eager to see where this partnership takes us.

New Service: Coaching

Jeff and I are experimenting with a new coaching-style service offering.

For a fixed monthly fee, we can coach your staff and help you solve problems. Of course, we can work with you through Zoom without racking up travel costs. We’re also happy to come see you every once in a while if it makes sense.

Most of our clients get everything they need from us without even having us log into their system. In other situations, we’ll work independently from time to time as a player-coach. Regardless of how the work gets done, we’re happy to teach your team everything we know.

Jeff and I have a lot of experience. We know where a lot of the traps are, and the optimization methods and tools we use give us an unfair advantage over anything else you’ve probably ever seen. Also, we have a comprehensive professional network that spans the globe.

Have a look at https://method-r.com/consulting/coaching/ if you’re interested in the details.

That’s All for Now

That’s all for now; thank you for letting me catch up with you.

—Cary Millsap, President of Method R Corporation