The ideal candidate has 3-5 years of experience implementing scalable and sustainable data engineering solutions using tools such as Databricks, Snowflake, Teradata, Apache Spark, and Python. Responsibilities include creating, maintaining, and optimizing data pipelines as workloads transition from development to production for specific use cases, with ownership of end-to-end development: coding, testing, debugging, and deployment.

Automation is key: you will drive the use of modern tools and techniques to automate repetitive data preparation and integration tasks and thereby enhance productivity. You will map data between source systems, data warehouses, and data marts, and train counterparts in data pipelining and preparation techniques. Collaboration is essential; you will interface with other technology teams to extract, transform, and load data from various sources, and you will also play a crucial role in promoting data and analytics capabilities to business unit leaders, educating them on leveraging these capabilities to achieve their business goals.

You should be proficient in converting SQL queries into Python code that runs on a distributed system (see the sketch below) and in developing libraries for code reusability. Eagerness to learn new technologies in a fast-paced environment and excellent communication skills are essential for this role. Experience with data pipeline and workflow management tools such as Rundeck and Airflow, AWS cloud services such as EC2, EMR, RDS, and Redshift, and stream-processing systems such as Spark Streaming would be advantageous.
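As an illustration of the SQL-to-Python conversion mentioned above, here is a minimal PySpark sketch. The table name `orders` and its columns are invented for this example; the pattern shown (the same aggregation expressed via `spark.sql` and via the DataFrame API) is standard Spark practice rather than anything specific to this role.

```python
# Hypothetical example: the same aggregation expressed as Spark SQL and as
# equivalent DataFrame-API code. Table and column names are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sql-to-python-sketch").getOrCreate()

# SQL version, as it might arrive from an analyst (assumes a registered
# table named "orders"):
daily_totals_sql = spark.sql("""
    SELECT order_date, SUM(amount) AS total_amount
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date
""")

# Equivalent DataFrame-API version, easier to unit-test and to package
# into a reusable library function:
def daily_totals(orders_df):
    """Aggregate completed orders by day; runs distributed on the cluster."""
    return (
        orders_df
        .filter(F.col("status") == "COMPLETED")
        .groupBy("order_date")
        .agg(F.sum("amount").alias("total_amount"))
    )
```

Wrapping the transformation in a plain function like `daily_totals` is one common way to meet the "reusable libraries" expectation: the same logic can then be imported by multiple pipelines and tested against small local DataFrames.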
Role Overview: As a member of the team, you will provide 1st line support for all Ticketmaster alerts and queries. You will also perform on-call duties as part of a global team monitoring the availability and performance of the ticketing systems and APIs used by third-party services. In addition, you will resolve advanced issues and provide advanced troubleshooting for escalations. Your expertise will be crucial in driving continuous improvements to products, tools, configurations, APIs, and processes by sharing learnings, feedback, and design input with internal technical teams and integrators. Independently learning new technologies and mastering Ticketmaster ticketing platform products and services will be essential in providing full-stack diagnostics to determine the root cause of issues.

Key Responsibilities:
- Provide 1st line support for all Ticketmaster alerts and queries
- Perform on-call duty as part of a global team monitoring the availability and performance of ticketing systems and APIs
- Resolve advanced issues and provide advanced troubleshooting for escalations
- Provide Subject Matter Expertise to cross-functional teams on threat issues
- Drive continuous improvements to products, tools, configurations, APIs, and processes
- Independently learn new technologies and master Ticketmaster ticketing platform products and services
- Ensure documentation and processes are up to date and suitable for internal stakeholder use
- Work on automation to reduce toil

Qualifications Required:
- BA/BS degree in computer science or a related field, or relevant work experience in lieu of a degree
- Experience with bot detection and blocking systems
- Troubleshooting skills ranging from diagnosing low-level request issues to large-scale issues
- Proficiency in Bash/Python/Go for operations scripts and text processing (see the sketch after this listing)
- Working knowledge of the HTTP protocol, basic web systems, and analysis tools such as Splunk, the Kibana/ELK stack, and database products (Oracle/MySQL/Databricks/Snowflake/etc.)
- Experience working with a 24/7 shift-based team
- Experience in a global, fast-paced environment, resolving multiple interrupt-driven priorities simultaneously
- Strong English language communication skills and the ability to collaborate closely with remote team members
- Ability to work autonomously while sharing new knowledge with technology teams
- Commitment, adaptability, and an embrace of continuous learning and improvement
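To make the "operations scripts and text processing" and bot-detection points concrete, here is a minimal Python sketch that scans a web access log and flags high-volume client IPs. The log path, log format (common/combined), and the request threshold are all assumptions for illustration; a real bot-detection pipeline would use dedicated tooling and far richer signals.

```python
# Hypothetical toil-reduction script: scan an access log in common/combined
# format and flag client IPs whose request volume suggests bot traffic.
# The log format, path, and threshold are assumed for illustration.
import re
import sys
from collections import Counter

LOG_LINE = re.compile(r'^(\d+\.\d+\.\d+\.\d+) \S+ \S+ \[[^\]]+\] "([A-Z]+) (\S+)')
THRESHOLD = 1000  # requests per log window considered suspicious (assumed)

def suspicious_ips(path):
    """Count requests per client IP and return those over the threshold."""
    hits = Counter()
    with open(path) as fh:
        for line in fh:
            match = LOG_LINE.match(line)
            if match:
                hits[match.group(1)] += 1
    return [(ip, n) for ip, n in hits.most_common() if n >= THRESHOLD]

if __name__ == "__main__":
    for ip, count in suspicious_ips(sys.argv[1]):
        print(f"{ip}\t{count} requests")
```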
You will be responsible for providing 1st line support for all Ticketmaster alerts and queries. Additionally, you will perform on-call duty as part of a global team monitoring the availability and performance of ticketing systems and APIs. Your role will involve resolving advanced issues, providing advanced troubleshooting for escalations, and offering Subject Matter Expertise to cross-functional teams on threat issues.

Key Responsibilities:
- Provide 1st line support for all Ticketmaster alerts and queries
- Perform on-call duty to monitor the availability and performance of ticketing systems and APIs (a minimal health-check sketch follows this listing)
- Resolve advanced issues and provide troubleshooting for escalations
- Provide Subject Matter Expertise to cross-functional teams on threat issues
- Drive continuous improvements to products, tools, APIs, and processes
- Independently learn new technologies and master Ticketmaster ticketing platforms

Qualifications Required:
- BA/BS degree in computer science or a related field, or relevant work experience
- Experience with bot detection and blocking systems
- Troubleshooting skills from diagnosing low-level request issues to large-scale issues
- Proficiency in Bash/Python/Go for operations scripts and text processing
- Working knowledge of the HTTP protocol, basic web systems, and analysis tools
- Experience working with a 24/7 shift-based team
- Experience in a global, fast-paced environment
- Strong English language communication skills
- Ability to collaborate closely with remote team members
- Embrace of continuous learning and improvement

In addition to the above responsibilities and qualifications, you will work on automation to reduce toil. You should be passionate, motivated, resourceful, innovative, forward-thinking, and committed to adapting quickly in a fast-paced environment. Your ability to work autonomously while sharing knowledge with technology teams and embracing continuous learning will be essential for success in this role.
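For the availability-monitoring responsibility above, here is a minimal health-check sketch using only the Python standard library. The endpoint URL, timeout, and latency threshold are placeholders; production monitoring would run checks on a schedule and feed results into an alerting system rather than printing them.

```python
# Hypothetical monitoring sketch: probe an API endpoint, time the response,
# and report an alert if it is slow or unhealthy. The URL and thresholds
# are invented for illustration.
import time
import urllib.request

ENDPOINT = "https://api.example.com/health"  # placeholder URL
TIMEOUT_S = 5
SLOW_MS = 500  # latency above this is flagged (assumed SLO)

def check(url):
    """Return (healthy, http_status, latency_ms) for one probe of the URL."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
            latency_ms = (time.monotonic() - start) * 1000
            return resp.status == 200 and latency_ms < SLOW_MS, resp.status, latency_ms
    except OSError:  # covers URLError, timeouts, connection resets
        return False, None, (time.monotonic() - start) * 1000

if __name__ == "__main__":
    ok, status, latency = check(ENDPOINT)
    print(f"status={status} latency={latency:.0f}ms {'OK' if ok else 'ALERT'}")
```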
As a Data Engineer with 3-5 years of experience, your role will involve implementing scalable and sustainable data engineering solutions using tools like Databricks, Snowflake, Teradata, Apache Spark, and Python. You will be responsible for creating, maintaining, and optimizing data pipelines as workloads transition from development to production for specific use cases, and your ownership will extend to end-to-end development tasks, including coding, testing, debugging, and deployment.

Your key responsibilities will include:
- Driving automation through the use of modern tools and techniques to automate repeatable data preparation and integration tasks, thereby enhancing productivity
- Mapping data across source systems, data warehouses, and data marts
- Training counterparts in data pipelining and preparation techniques to facilitate easier integration and consumption of the required data
- Collaborating with other technology teams to extract, transform, and load data from diverse sources
- Promoting data and analytics capabilities to business unit leaders and guiding them on leveraging these capabilities to achieve their business objectives
- Converting SQL queries into Python code that runs on a distributed system
- Developing reusable code libraries
- Demonstrating eagerness to learn new technologies in a dynamic environment
- Communicating clearly with technical and business stakeholders

Good-to-have qualifications:
- Experience with data pipeline and workflow management tools like Rundeck and Airflow (a minimal DAG sketch follows this listing)
- Proficiency in AWS cloud services such as EC2, EMR, RDS, and Redshift
- Familiarity with stream-processing systems like Spark Streaming
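As an illustration of the workflow-management tooling mentioned in the good-to-have list, here is a minimal Airflow sketch of a daily extract-transform-load pipeline. The DAG id, task names, and stubbed callables are invented; the `DAG` and `PythonOperator` constructs are standard Airflow 2.x.

```python
# Hypothetical Airflow sketch of a daily ETL pipeline. DAG id, task names,
# and the callables are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    """Pull raw records from a source system (stubbed)."""
    ...

def transform(**context):
    """Clean and map records between source and warehouse schemas (stubbed)."""
    ...

def load(**context):
    """Write curated records to the warehouse or data mart (stubbed)."""
    ...

with DAG(
    dag_id="daily_orders_pipeline",   # invented name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

Expressing the pipeline as a DAG of small tasks is what makes the dev-to-production transition described above manageable: each step can be retried, monitored, and backfilled independently.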
As a highly technical Program Manager on the Data Engineering team, your role will involve owning the planning, unblocking, and acceleration of critical data deliveries. You will work in data management and storage systems such as Snowflake, Databricks, and Teradata to measure, track, and manage projects for timely delivery. Your responsibilities will include coordinating across multiple engineering teams, building prototypes for data migration, driving cross-functional programs to optimize performance, and defining and executing programs for delivering data as a product.

Key Responsibilities:
- Anticipate bottlenecks and make smart trade-offs to ensure the right outcomes
- Roll up your sleeves to help teams overcome challenges and deliver on time
- Coordinate across multiple engineering teams to deliver holistic designs
- Build prototypes and proofs of concept for data migration, transformation, and reporting
- Drive cross-functional programs to optimize the performance, cost, and reliability of data systems
- Define and execute programs for delivering data as a product
- Coordinate project intake, estimation, and prioritization
- Create and maintain project and program schedules
- Identify and communicate with stakeholders regularly

Qualifications Required:
- 5+ years of Program Management experience and 3+ years working with data
- Demonstrable track record of delivering complex data infrastructure and reporting tools
- Strong proficiency in project management tools such as Jira, Asana, and Smartsheet
- Full competency in project management methodologies such as Scrum, Kanban, and Waterfall
- Basic proficiency in data management and storage systems such as Databricks, Snowflake, and Teradata
- Ability to write queries, test API endpoints, and build dashboards (a minimal sketch follows this listing)
- Ability to engage with senior stakeholders and influence across the organization
- Ability to innovate and simplify project scope and assumptions within constraints
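To illustrate the "write queries, test API endpoints" qualification, here is a small stdlib-only Python sketch: a row-count smoke test against a database table plus a probe of a reporting endpoint. The database path, table, and URL are placeholders, and `sqlite3` stands in for whatever warehouse driver (Snowflake, Databricks, Teradata) would actually be used.

```python
# Hypothetical delivery smoke test: verify a curated table is non-empty and
# that a reporting endpoint answers. All names and URLs are placeholders;
# sqlite3 is a stand-in for a real warehouse driver.
import sqlite3
import urllib.request

def row_count(db_path: str, table: str) -> int:
    """Confirm the delivered table actually contains rows."""
    with sqlite3.connect(db_path) as conn:
        (count,) = conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
    return count

def endpoint_ok(url: str) -> bool:
    """Confirm a reporting endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("rows:", row_count("warehouse.db", "daily_orders"))           # placeholder names
    print("api ok:", endpoint_ok("https://reports.example.com/health")) # placeholder URL
```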