Information about entities in large networks is often represented as high-dimensional numerical vectors, which makes computing various relationships among entities expensive. Ideally, we would like to compress the data while still being able to compute the same relationships. Random projections have proven to be a powerful tool for dimension reduction, and this approach has simplified many such problems. The Graph Intelligence Sciences team at Microsoft recently used this method to efficiently compute products of large matrices associated with Office 365 tenant-level graphs, and thus to represent higher-order similarity between vertices in these graphs. In this talk, we will discuss how well certain quantities are preserved under random projection and present new results on approximation quality under cosine similarity. We will also show some cases where the method can fail. This is ongoing work with Cassiano Becker.
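As a minimal illustration of the idea (not the team's actual pipeline; dimensions, vectors, and the Gaussian projection matrix below are assumptions for the sketch), a Johnson–Lindenstrauss-style random projection compresses high-dimensional vectors while approximately preserving their cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: original dimension d, reduced dimension k.
d, k = 10_000, 512

# Two synthetic high-dimensional entity vectors with shared structure.
base = rng.standard_normal(d)
u = base + 0.5 * rng.standard_normal(d)
v = base + 0.5 * rng.standard_normal(d)

# Random projection: Gaussian matrix with entries scaled by 1/sqrt(k),
# so squared norms (and inner products) are preserved in expectation.
R = rng.standard_normal((k, d)) / np.sqrt(k)
u_p, v_p = R @ u, R @ v

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The projected vectors are ~20x smaller, yet their cosine similarity
# stays close to the original (error shrinks roughly like 1/sqrt(k)).
print(f"original:  {cosine(u, v):.4f}")
print(f"projected: {cosine(u_p, v_p):.4f}")
```

The same projection commutes with matrix products, which is what makes it useful for approximating products of large graph-derived matrices: one can project once and work entirely in the reduced space.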