Introduction to Amazon's Cloud-Based Services

Posted by Matt Mitchell on Jul 7, 2020 6:42:28 AM

Migrating your information to a cloud-based infrastructure is a critical first step in digital transformation efforts. Doing so will allow you to scale your technology infrastructure to meet growing demands, eliminate the need to maintain hardware, and allow your company to become more agile.

No more will you have to worry about a coffee spill destroying a server, or the physical security of your mission critical infrastructure.

Moving to the cloud streamlines operations and security, giving you added flexibility and more time to focus on the core of what you do best.

There are several leading players in cloud-based data infrastructure, including Amazon Web Services (AWS). This overview will provide a reference for which AWS products might best support your efforts and take a look at some of the most valuable options for getting started with cloud-based data.

Read More

Topics: Cloud, How Tos, Efficiency

Supervised and Unsupervised Machine Learning Primer

Posted by Matt Mitchell on Jul 3, 2020 6:52:35 AM

Supervised and unsupervised learning algorithms are often the first two ‘families’ of techniques introduced in machine learning classrooms and textbooks. So, what are they?

Read More

Topics: Skillset of Data Analysts, Professional Development, How Tos, Machine Learning

Dashboard and Visualization Design Principles

Posted by Matt Mitchell on Jun 22, 2020 8:38:47 PM

Designing a meaningful dashboard or visualization can be a complex and difficult task.

Outlining how best to display data on top of what metrics to track and highlight is a big ask, and doing it ineffectively can diminish the impact of your analytical insights.

This article will walk you through some design considerations and how to go about implementing your very own dashboard.

Read More

Topics: Professional Development, Data Science Developments, How Tos, Dashboards & Visualization

Modular Jupyter Workflows With Autoreload

Posted by Matt Mitchell on Jun 18, 2020 6:44:57 AM

During my first year as a data scientist, I watched myself and others retype the same lines of code and retrace our work time and time again. Perhaps some of this did not warrant concern.

After all, how long does it take to type the standard imports,

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

and the like?

Yet there were also plenty of real concerns, as my colleagues and I performed many of the same tasks repeatedly, filling null values, standardizing column names, and creating dummy variables. Shouldn’t we be able to standardize these rote processes and not have to recode the entire preprocessing pipeline every time?
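These rote steps can be factored into a small, reusable helper instead of being recoded in every notebook. Here's a minimal sketch (the function and column names are illustrative, not from the article):

```python
import pandas as pd

def preprocess(df, fill_value=0, dummy_cols=None):
    """Bundle the rote cleaning steps into one reusable call."""
    df = df.copy()
    # Standardize column names: strip whitespace, lowercase, underscores for spaces
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    # Fill null values
    df = df.fillna(fill_value)
    # Create dummy variables for the requested categorical columns
    if dummy_cols:
        df = pd.get_dummies(df, columns=dummy_cols)
    return df

raw = pd.DataFrame({"Home State": ["NY", "CA", "NY"], "Age": [30, None, 25]})
clean = preprocess(raw, dummy_cols=["home_state"])
print(clean.columns.tolist())  # ['age', 'home_state_CA', 'home_state_NY']
```

Once a helper like this lives in a module, every project gets the same preprocessing for free.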

Even worse, sometimes after a day's worth of exploratory analysis, fruitful insights would surface, only for you to realize that the Jupyter notebook you'd been working in was a jumbled mess: you'd jumped around repeatedly, fixing errors and rerunning cells out of order. How on earth are you supposed to repeat that process now?

It’s also funny to me that despite proclaiming the immense value of object-oriented programming, none of my instructors pointed out how to practically incorporate that philosophy into a daily workflow.

I hope this article helps you sidestep the pitfalls many of us have fallen into in order to develop a more productive and sensible workflow.
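As a preview, here's a pure-Python sketch of what the autoreload extension automates: helper code lives in a module, and edits to that module are picked up without restarting the kernel. (The `helpers_demo` module below is a stand-in created on the fly for illustration.)

```python
import importlib
import pathlib
import sys

# Stand-in for a project module (e.g. preprocessing.py) holding reusable helpers
pathlib.Path("helpers_demo.py").write_text("def greet():\n    return 'version 1'\n")
sys.path.insert(0, ".")
import helpers_demo
print(helpers_demo.greet())  # version 1

# Simulate editing the module on disk...
pathlib.Path("helpers_demo.py").write_text("def greet():\n    return 'version 2!'\n")

# ...then reload it. In Jupyter, the autoreload extension does this step
# for you automatically before every cell execution:
#   %load_ext autoreload
#   %autoreload 2
importlib.reload(helpers_demo)
print(helpers_demo.greet())  # version 2!
```

With autoreload enabled, you edit the module in your editor and simply re-run cells; no manual `reload` calls or kernel restarts needed.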

Read More

Topics: Skillset of Data Analysts, How Tos, Jupyter, Python, Efficiency

Clicking, Typing, Hovering and Scrolling with Selenium

Posted by Matt Mitchell on Jun 12, 2020 8:32:17 AM

So you've tried to scrape some data from the latest website, only to realize your current tool set for parsing static HTML pages no longer suffices.

With the rise of AJAX, many of today's websites (including the likes of Netflix and Airbnb) use React.js or similar frameworks to build interactive interfaces where the DOM itself is updated dynamically based on user interactions. This contrasts with older methods of navigating to a new URL and making an additional HTTP request.

In these scenarios, older tools such as BeautifulSoup may not be enough.
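When you do need to drive a real browser, a minimal Selenium sketch looks like this. The URL and selectors below are placeholders, and it assumes a local Chrome/chromedriver setup:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.action_chains import ActionChains

driver = webdriver.Chrome()
driver.get("https://example.com")  # placeholder URL

# Typing into a search box (selector is illustrative)
box = driver.find_element(By.NAME, "q")
box.send_keys("data science")

# Clicking a button
driver.find_element(By.CSS_SELECTOR, "button.submit").click()

# Hovering over a menu item to reveal a dropdown
menu = driver.find_element(By.ID, "nav-menu")
ActionChains(driver).move_to_element(menu).perform()

# Scrolling to the bottom of the page to trigger lazy-loaded content
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")

driver.quit()
```

Because Selenium executes the page's JavaScript in a real browser, the DOM you interact with is the same one a human user sees.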

Read More

Topics: Professional Development, How Tos, Web Scraping

Assessing Sentiment and Other Insights with Twitter Data

Posted by Matt Mitchell on Jun 9, 2020 8:51:00 AM

How can you use the Twitter API to keep a pulse on your customer base or market trends? From tracking followers to analyzing brand affinity, we’ll take a look at various techniques that can be leveraged via the Twitter API, along with logistical considerations and the restrictions imposed by the Twitter API’s terms of service.
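To make "sentiment" concrete, here's a toy word-count scorer over tweet text. In practice you would fetch tweets through the Twitter API (via a client library and your own credentials) and likely apply a trained model; this naive sketch, with illustrative word lists, just shows the idea:

```python
# Illustrative (not exhaustive) sentiment word lists
POSITIVE = {"love", "great", "excellent", "happy"}
NEGATIVE = {"hate", "terrible", "awful", "angry"}

def sentiment(text):
    """Return (# positive words - # negative words) for a piece of text."""
    words = text.lower().replace(",", " ").replace(".", " ").replace("!", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

tweets = [
    "I love this brand, great service!",
    "Terrible experience. Never again.",
]
print([sentiment(t) for t in tweets])  # [2, -1]
```

Aggregating scores like these over time, or by hashtag, gives a rough pulse of how a brand or topic is trending.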

Read More

Topics: Skillset of Data Analysts, Data Science Developments

Executing Messy Joins

Posted by Matt Mitchell on Jun 4, 2020 8:13:00 AM

In building a data-driven organization, unifying disparate datasets is essential, providing a comprehensive baseline for modeling and analysis.

But joining data together to establish this baseline can be messy.
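One common source of messiness is inconsistent join keys across systems. A minimal pandas sketch (the datasets and column names are illustrative): normalize the keys first, then use an outer join with `indicator=True` to flag rows that failed to match.

```python
import pandas as pd

# Two datasets that should join on customer, but with inconsistent formatting
crm = pd.DataFrame({"Customer": [" Acme Corp", "globex "],
                    "region": ["East", "West"]})
sales = pd.DataFrame({"customer": ["acme corp", "Globex"],
                      "revenue": [1000, 2000]})

# Normalize the join keys before merging
crm["key"] = crm["Customer"].str.strip().str.lower()
sales["key"] = sales["customer"].str.strip().str.lower()

# An outer join with indicator=True flags any rows that failed to match
merged = crm.merge(sales, on="key", how="outer", indicator=True)
print(merged[["key", "region", "revenue", "_merge"]])
```

Filtering the `_merge` column for `left_only` or `right_only` rows is a quick audit of how messy the join really was.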

Read More

Topics: Skillset of Data Analysts, Data Science Developments, How Tos

Data Science Digest 10

Posted by Chisel Analytics on Mar 19, 2020 6:45:00 AM

For a data professional, time is at a premium. Here are some tips and trends you'll want to stay on top of!

Title: Towards open health analytics: our guide to sharing code safely on GitHub

Source: https://towardsdatascience.com/towards-open-health-analytics-our-guide-to-sharing-code-safely-on-github-5d1e018897cb
Author: Fiona Grimm
How: Provides step-by-step instructions and things to consider
When to use this: When preparing to create a GitHub repository, especially one that may touch sensitive data
Why it's helpful: Case study with tips, instructions, checklist and links from someone who has done this before
Suggested application: Contribute to and benefit from the input of the global community
Business impact or insights to be gained: Good reference to provide management who might be resistant or concerned about sharing company code or information
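One concrete step in this vein is keeping data files out of version control entirely. A sketch of a `.gitignore` for an analytics repo (the patterns are illustrative; tailor them to your project):

```
# .gitignore -- keep sensitive data and credentials out of the repository
data/
*.csv
*.xlsx
.env
credentials.json
```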

Read More

Topics: Professional Development, Data Science Developments

Data Science Digest 9

Posted by Chisel Analytics on Feb 20, 2020 6:45:00 AM

Keeping up is hard for data scientists to do. Chisel Analytics is happy to help!

Title: Pandas Version 1.0 is Out! Top 4 Features Every Data Scientist Should Know

Source: https://www.analyticsvidhya.com/blog/2020/01/pandas-version-1-top-4-features/
How: Make sure you have the current version of Pandas. If yours is an older version, update with
$ pip install --upgrade pandas==1.0.0rc0
Also, "first upgrade to Pandas 0.25 and to ensure your code is working without warnings, before upgrading to pandas 1.0."
When to use this: When you want to filter and "analyze categorical and text-based features"; perform calculations where missing values yield "null" rather than false; or present information about your dataframe, or markdown tables, in a clearer fashion; plus more enhancements.
Why it's helpful: This widely used library now offers dedicated DataTypes for strings, a new scalar for missing values, an improved data information table, and Markdown format for dataframes.
Suggested application: When sharing information with those not used to working directly in the datasets, keeping logs for quick future reference, or running calculations that can incorporate more records by leveraging a "null" value versus "false".
Business impact or insights to be gained: As more real-world challenges are faced by data professionals, this open-source data analysis and manipulation tool continues to evolve, providing fast, flexible, and expressive data structures for working with relational or labeled data.
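Two of those features can be shown in a few lines; a minimal sketch (assuming pandas >= 1.0 is installed):

```python
import pandas as pd

# Dedicated string dtype instead of the generic object dtype
s = pd.Series(["apple", None, "cherry"], dtype="string")
print(s.dtype)  # string

# Missing entries become the new pd.NA scalar, which stays "missing"
# through comparisons instead of collapsing to False
mask = s == "apple"
print(mask.tolist())
```

Here `mask` holds `True`, `<NA>`, `False`: the missing value propagates as "unknown" rather than being silently treated as a non-match.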

Read More

Topics: Professional Development, Data Science Developments

Data Science Digest 8

Posted by Chisel Analytics on Feb 6, 2020 6:45:00 AM

Keeping up is hard for data scientists to do. Chisel Analytics is happy to help!

Title: Karate Club consists of state-of-the-art methods to do unsupervised learning on graph structured data

Source: https://github.com/benedekrozemberczki/karateclub and https://karateclub.readthedocs.io/en/latest/notes/introduction.html
How: GitHub installation and documentation for data handling, full list of implemented methods, and datasets.
When to use this: When you need to perform "small-scale graph mining research. First, it provides network embedding techniques at the node and graph level. Second, it includes a variety of overlapping and non-overlapping community detection methods."
Why it's helpful: Incorporates Overlapping Community Detection, Non-Overlapping Community Detection, Neighborhood-Based Node Level Embedding, Structural Node Level Embedding, Attributed Node Level Embedding, and Graph Level Embedding.
Suggested application: Use the clusterings and embeddings for downstream learning. Use-case examples include measuring how well Facebook page clusters align with group memberships, detecting abuse on the Twitch platform, and classifying threads on Reddit.
Business impact or insights to be gained: "Only quick and minimal changes to the code are needed when a model performs poorly."

Read More

Topics: Professional Development, Data Science Developments

Chisel Analytics

The Benefits of Analytics

Expand your insights into the opportunities that analytics can offer. Chisel Analytics provides a platform that aims to break down the barriers to building or growing your data science and analytics programs. Our blog, tools and resources help companies, recruiters and data specialists stay informed, stay organized and stay engaged.

Sign up to get content relevant to you:

About Data Science for Analytics and Operations Leaders
What IT Managers Need to Know about Data Science
Recruiting for Data Science
Data Science Digest
