Dynamically Updating Date Labels in an App

This post, my first in over a year, improves upon the code in my last post. The crime-mapping app I posted in May 2017 included some kludgy, manual handling of dates: the categorical, rolling 12-month year labels were hard-coded rather than dynamically updated to reflect the data's most recent date. This post presents a much smoother way to automatically assign those date labels, which obviates the need for a user to manually (and correctly) update that field whenever the data are refreshed.
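
As a taste of the approach, here's a minimal sketch (not the app's actual code) that derives rolling 12-month period labels from the most recent date in the data. The data frame crimes and its date column are hypothetical names:

    # Anchor everything to the most recent date in the data
    most_recent <- max(crimes$date, na.rm = TRUE)

    # Start of the most recent rolling 12-month period, and the one before it
    yr1_start <- seq(most_recent, by = "-1 year", length.out = 2)[2] + 1
    yr2_start <- seq(most_recent, by = "-2 years", length.out = 2)[2] + 1

    # Build readable labels, e.g., "May 2016 - Apr 2017"
    label_for <- function(start, end) {
      paste(format(start, "%b %Y"), "-", format(end, "%b %Y"))
    }
    yr1_label <- label_for(yr1_start, most_recent)
    yr2_label <- label_for(yr2_start, yr1_start - 1)

    # Assign each observation its period label
    crimes$period <- ifelse(crimes$date >= yr1_start, yr1_label,
                     ifelse(crimes$date >= yr2_start, yr2_label, NA))

Because the labels are computed from max(crimes$date) rather than typed in, they update themselves whenever new data arrive.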

Read More

Parallel Processing for Memory-Intensive Maps and Graphics

Rendering graphics typically takes R some time, so if you're going to be producing a large number of similar graphics, it makes sense to leverage R's parallel processing capabilities. However, if you're looking to collect and return the graphics together in a sorted object (as we were in the previous post on animated choropleths), there's a catch: R has to keep the whole object in random access memory (RAM) during parallel processing. As the number of graphics files increases, you risk exceeding the available RAM, which will cause parallel processing to slow dramatically (or crash). In contrast, a good, old-fashioned sequential for loop can write each iteration's additions to the object in the global environment, freeing RAM for the next iteration. Paradoxically, then, parallel processing can take longer than sequential processing in this situation. In the case of the animated choropleths in the previous post, parallel processing took 21 minutes, whereas sequential processing took 11 minutes.

This post presents code to combine the efficiency and speed of parallel processing with the RAM-clearing benefits of sequential processing when generating graphics.
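
As a preview, here's a minimal sketch of the batching idea using the parallel package. The object names, batch size, and the hist() stand-in for the real rendering code are all hypothetical, not the post's actual code:

    library(parallel)

    units <- 1:500                                            # hypothetical graphic IDs
    batches <- split(units, ceiling(seq_along(units) / 50))   # 50 graphics per batch

    cl <- makeCluster(detectCores() - 1)

    for (batch in batches) {
      # Render one batch in parallel (hist() is a stand-in for the
      # real choropleth-rendering code)
      plots <- parLapply(cl, batch, function(i) hist(rnorm(1000), plot = FALSE))

      # Write the batch to disk and clear it from RAM before the next batch
      saveRDS(plots, file = paste0("plots_batch_", min(batch), ".rds"))
      rm(plots); gc()
    }

    stopCluster(cl)

Writing each batch to disk bounds peak RAM use by the batch size rather than by the total number of graphics, while each batch still renders in parallel.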

Read More

Parallel Processing

If you've needed to perform the same sequence of tasks or analyses over multiple units, you've probably found for loops helpful. They aren't without their challenges, however: as the number of units increases, so does the processing time. For large data sets, the processing time associated with a sequential for loop can become so cumbersome and unwieldy as to be unworkable. Parallel processing is a really nice alternative in these situations: it makes use of your computer's multiple processing cores to run the for loop code simultaneously across your list of units. This post presents code to:

  1. Perform an analysis using a conventional for loop.
  2. Modify this code for parallel processing.

To illustrate these approaches, I'll be working with the New Orleans, LA Postal Service addresses data set from the past couple of posts. You can obtain the data set here, and code to quickly transform it for these analyses here.

The question we'll be looking to answer with these analyses is: which areas in and around New Orleans have exhibited the greatest growth in the past couple of years?
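
To preview the pattern, here's a minimal sketch of both approaches on a toy version of that question. The data frame addr and its columns area, active_2015, and active_2017 are hypothetical stand-ins for the real address data:

    areas <- unique(addr$area)

    ## 1. Conventional for loop: compute growth one area at a time
    growth <- setNames(numeric(length(areas)), areas)
    for (a in areas) {
      sub <- addr[addr$area == a, ]
      growth[a] <- sum(sub$active_2017) - sum(sub$active_2015)
    }

    ## 2. The same analysis, run across multiple cores
    library(parallel)
    cl <- makeCluster(detectCores() - 1)
    clusterExport(cl, "addr")            # make the data available to each worker
    growth_par <- parSapply(cl, areas, function(a) {
      sub <- addr[addr$area == a, ]
      sum(sub$active_2017) - sum(sub$active_2015)
    })
    stopCluster(cl)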

Read More

Big Data Wrangling: Reshaping from Long to Wide

Reshaping data sets from wide to long in R tends to work smoothly regardless of the size of the data set, but reshaping from long to wide can break (or take so long you wonder if it's stopped working) with large data sets. The threshold at which this problem arises will vary depending upon your system and memory allocation. I find that it occurs with data sets of ~25,000 rows or more under the default heap size, and with data sets of ~1 million rows or more under the maximum heap allocation.

This post shares an alternative approach that resolves size-related limitations when reshaping large data sets from long to wide. The essence of the solution is this: subset the data based upon the levels of the repeated assessments, rename the measured variable to something unique to each assessment, and then merge the data for the separate assessments back together. Although the reshape() and dcast() code for this task is more concise, the subsetting approach presented here doesn't stall on very large data sets.
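
Here's a minimal sketch of that subset-and-merge logic, assuming a long data set dat with columns id, time, and a measured variable score (hypothetical names):

    times <- sort(unique(dat$time))

    wide <- NULL
    for (t in times) {
      # Subset to a single assessment and give its measure a unique name
      sub <- dat[dat$time == t, c("id", "score")]
      names(sub)[2] <- paste0("score_t", t)

      # Merge this assessment back onto the accumulating wide data set
      wide <- if (is.null(wide)) sub else merge(wide, sub, by = "id", all = TRUE)
    }

Because each merge touches only one assessment's worth of rows at a time, memory demands stay modest even when the full long data set is very large.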

If you've found yourself waiting for minutes or hours while R chews on a reshape() or dcast() command, wondering whether the program has silently stalled out, there's hope!

Read More

Automatically Updating Date Field

Suppose you're working with data that includes dates (e.g., birth dates, start or stop dates for a project or customer account, graduation dates, etc.), and you want to flag those observations whose dates meet some criterion relative to today's date. For example, say you're working with customer account data and want to identify the accounts that were closed in the past year. To flag recently closed accounts, you need to test each account's close date against a date representing one year ago today; but given that time keeps passing, the date that represents one year ago today keeps changing, too. If you're going to be re-running your code periodically, you'll want the program to automatically update the test date based upon the current date. (The alternative is manually updating the test date each time you run the program, which is both inefficient and susceptible to error.)

This post presents some clean, simple code that will update a date-related field using today's date as the reference point.
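
As a preview, here's a minimal sketch using base R's Sys.Date(); the accounts data frame and its close_date column are hypothetical names:

    # "One year ago today," recomputed from the system date at each run
    one_year_ago <- seq(Sys.Date(), by = "-1 year", length.out = 2)[2]

    # Flag accounts closed within the past year
    accounts$closed_past_year <- !is.na(accounts$close_date) &
      accounts$close_date >= one_year_ago

Because Sys.Date() is evaluated at run time, the flag stays current every time the script is re-run, with no manual updating required.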

Read More

Automating Repetitive Tasks for Efficiency: For Loops

It's incredibly useful to be able to automate an analysis or set of analyses that you want to perform multiple times in exactly the same way. For example, if you're working in industry, you might want to perform analyses that allow you to draw separate conclusions about the performance of individual stores, regions, products, customers, or employees. If you're working in academia, you might want to examine each of several dependent variables separately. Frequently, this entails several distinct steps, such as subsetting the data, performing the analysis or set of analyses, and generating well-labeled output.

This post presents one approach for feeding R a list of units to loop through, and then iteratively performing the same set of tasks for each unit.
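
As a preview, here's a minimal sketch of the pattern, with a hypothetical sales data frame (columns store, month, and revenue) and a simple linear model standing in for the real analysis:

    stores <- unique(sales$store)

    for (s in stores) {
      # Subset the data to the current unit
      sub <- subset(sales, store == s)

      # Perform the analysis (a simple linear model as a stand-in)
      fit <- lm(revenue ~ month, data = sub)

      # Generate clearly labeled output
      cat("\n==== Store:", s, "====\n")
      print(summary(fit))
    }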

Read More