Gorilla Logic provides nearshore Agile teams to Fortune 500 and SMB companies, bringing unparalleled expertise in the delivery of full-stack web, mobile, and enterprise applications. Our highly collaborative Agile Gorillas are uniquely qualified to implement complex software initiatives. With offices in the United States, Costa Rica, and Colombia, Gorilla Logic helps clients gain competitive advantages to achieve results faster.
Gorilla Logic is looking for a Senior Data Engineer experienced with the AWS Cloud Platform. This is a unique and highly technical role responsible for data analysis within the enterprise. The role works with multiple data sources, aggregating them into data lakes and transforming data into business value. Our environment will require you to work effectively with your teammates, of course. But your real success will be measured by how well you couple critical thinking with self-motivation, enthusiasm, and determination.
*Hiring remote, full-time employees within Colombia, Costa Rica & Mexico
*Paid private insurance and compensated days off
*Align to US-based business hours; potential travel to US-based client sites
*Utilize Zoom video & audio calls to collaborate across locations
*Access to best-in-class technical training
*Endless career paths & growth opportunities within technology consulting!
*Process real-time data streams using Kafka
*Develop dashboards, clickstream processing, and data aggregation and normalization
*Analyze, organize, and integrate raw data sources
*Build data systems, data models, data processing, and data pipelines
*Work with key client stakeholders to evaluate business needs and priorities
*Interpret data to identify trends, patterns, and value creation opportunities
*Conduct complex data analysis and develop reports
*Enhance data quality, reliability, efficiency, and value
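The clickstream aggregation and normalization work described above can be sketched in plain Python. This is a minimal, illustrative example only; the event fields and page paths are hypothetical, not from any real client schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical raw clickstream events, as they might arrive from a stream.
raw_events = [
    {"user": "u1", "page": "/Home",  "ts": "2024-01-01T12:00:00Z"},
    {"user": "u2", "page": "/home",  "ts": "2024-01-01T12:01:30Z"},
    {"user": "u1", "page": "/price", "ts": "2024-01-01T12:02:10Z"},
]

def normalize(event):
    """Normalize one raw event: parse the timestamp, lowercase the page path."""
    return {
        "user": event["user"],
        "page": event["page"].lower(),
        "ts": datetime.fromisoformat(event["ts"].replace("Z", "+00:00")),
    }

def aggregate_page_views(events):
    """Aggregate normalized events into per-page view counts."""
    views = Counter()
    for event in events:
        clean = normalize(event)
        views[clean["page"]] += 1
    return views
```

In a production pipeline the same normalize-then-aggregate pattern would typically run over a Kafka consumer or a Spark stream rather than an in-memory list.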
*Bachelor's degree in Computer Science or related field (or equivalent experience)
*5+ years development and/or data engineering experience
*3+ years of experience with AWS Cloud Platform for data services
*3+ years programming experience using Python and SQL
*3+ years of experience with Apache Kafka and data streaming solutions
*Strong experience with Amazon Redshift cloud data warehousing
*Expertise with PySpark and Pandas in cloud data processing
*Experience implementing Infrastructure as Code (e.g., Terraform)
*Distributed, Microservices, and Serverless architecture experience
*Thorough understanding of programming fundamentals such as OOP, functional programming, data structures, and algorithm design
*Must have the ability to work in a dynamic, fast-paced environment
*Strong communication skills to interact within the team and across the business
*Good analytical thinking and problem-solving skills
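As a small illustration of the Python-and-SQL skill set listed above, here is a sketch using Python's built-in sqlite3 module; the `events` table and its columns are hypothetical stand-ins for a warehouse table such as one in Redshift:

```python
import sqlite3

# In-memory database standing in for a warehouse table (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 10.0), ("u1", 5.0), ("u2", 7.5)],
)

# Aggregate per-user totals: the kind of SQL summarization used in reporting.
report = conn.execute(
    "SELECT user_id, SUM(amount) AS total FROM events "
    "GROUP BY user_id ORDER BY total DESC"
).fetchall()
# report -> [("u1", 15.0), ("u2", 7.5)]
```

The same GROUP BY aggregation pattern carries over directly to Redshift or any other SQL engine; only the connection layer changes.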
*Financial services background including billing systems and events
*Experience with containers (Docker and Kubernetes)
*Java development experience
*Experience with distributed compute engines (Apache Spark), cloud-based MPP databases (Snowflake, BigQuery, Redshift), and data lakes (Azure Data Lake, S3)