Lead Data Architect
- Data Infrastructure
OVERVIEW OF RESPONSIBILITIES:
The Lead Data Architect will oversee the department's data integration, including developing a data model, maintaining a data warehouse, and writing scripts for data integration and analysis. They will create and oversee an automated reporting system, supervise and build on the Federation's data pipeline, perform one-off data manipulation and analysis, and manage other proprietary systems. This role will work closely and collaboratively with other members of the Data & Analytics division.
The Data, Analytics & Infrastructure Resource's goal is to generate lasting power for the labor movement by building the Federation's programmatic tools, web development, data systems, and analytics capacity. This team serves a broad range of clients across the labor movement, from other AFL-CIO departments to AFL-CIO affiliates to state and local labor bodies. Through investment in central infrastructure, training, and direct service work, the department aims to empower its partners to run stronger and more cost-effective political and legislative mobilization, digital, and organizing campaigns.
This position reports to the Deputy Director of Data, Analytics & Infrastructure.
DESCRIPTION OF DUTIES:
● Maintain and build on our data warehouse, the home for almost all of the AFL-CIO's political and organizing data.
● Deploy data pipelines to integrate new sources of data into our central database.
● Build reports and data visualizations, using data from the data warehouse and other sources.
● Produce scalable, replicable code that helps automate repetitive data management tasks.
● Perform one-off data manipulation/munging and analysis on a wide variety of political and organizing data.
● Help other Data & Analytics division staff troubleshoot their SQL, Python, or R code. Train other Data & Analytics staff on these skills.
● Provide support for the LAN as needed.
● Other duties as assigned.
QUALIFICATIONS:
● Strong command of relational databases and SQL; ability to Extract, Transform, and Load (ETL) data into a relational database.
● Proficiency with Python or R, especially for data manipulation and analysis; ability to build sequences of automated processes with these tools.
● General data manipulation skills: read in data, process and clean it, transform and recode it, merge different data sets together, reformat data between wide and long, etc.
● Ability to learn new techniques and troubleshoot code independently, e.g., finding answers to common programming challenges on Google. In other words, the ability to learn on the job.
● Write programming code that is well-documented and stored in a version control system (e.g., GitHub, GitLab, Bitbucket).
● Use APIs to push and pull data from various data systems and platforms.
● Experience with advanced data visualization and mapping is helpful, but not required.
● Ability to work long and extended hours as needed.
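As a purely illustrative sketch of the kind of data-manipulation task described above (reading in data, reshaping between wide and long, merging data sets, and recoding fields), the snippet below uses pandas in Python. All table names, column names, and values here are hypothetical, not drawn from any AFL-CIO system.

```python
import pandas as pd

# Hypothetical member-contact data in "wide" format: one row per member,
# one column per contact attempt.
wide = pd.DataFrame({
    "member_id": [101, 102],
    "call_1": ["no answer", "supportive"],
    "call_2": ["supportive", None],
})

# Reshape wide -> long: one row per (member, contact attempt).
long_form = wide.melt(id_vars="member_id", var_name="attempt",
                      value_name="result")

# Merge in a second (hypothetical) data set on the shared key.
roster = pd.DataFrame({"member_id": [101, 102],
                       "local": ["Local 12", "Local 34"]})
merged = long_form.merge(roster, on="member_id", how="left")

# Clean and recode: drop attempts that never happened and derive a
# boolean flag from the free-text result field.
merged = merged.dropna(subset=["result"])
merged["supportive"] = merged["result"].eq("supportive")
```

The same reshape/merge/recode pattern could equally be written in R (e.g., with tidyr and dplyr) or pushed down into SQL in the data warehouse; the choice is a matter of where the data lives and what the surrounding pipeline uses.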