0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Overview: We are seeking a talented Data Engineer with expertise in Apache Spark, Python/Java, and distributed systems. The ideal candidate will be skilled in creating and managing data pipelines using AWS.

Key Responsibilities:
- Design, develop, and implement data pipelines for ingesting, transforming, and loading data at scale.
- Utilise Apache Spark for data processing and analysis.
- Utilise AWS services (S3, Redshift, EMR, Glue) to build and manage efficient data pipelines.
- Optimise data pipelines for performance and scalability, considering factors like partitioning, bucketing, and caching (see the sketch after this posting).
- Write efficient and maintainable Python code.
- Implement and manage distributed systems for data processing.
- Collaborate with cross-functional teams to understand data requirements and deliver optimal solutions.
- Ensure data quality and integrity throughout the data lifecycle.

Qualifications:
- Proven experience with Apache Spark and Python/Java.
- Strong knowledge of distributed systems.
- Proficiency in creating data pipelines with AWS.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Proven experience in designing and developing data pipelines using Apache Spark and Python.
- Experience with distributed systems concepts (Hadoop, YARN) is a plus.
- In-depth knowledge of AWS cloud services for data engineering (S3, Redshift, EMR, Glue).
- Familiarity with data warehousing concepts (data modeling, ETL) is preferred.
- Strong programming skills in Python (Pandas, NumPy, Scikit-learn are a plus).
- Experience with data pipeline orchestration tools (Airflow, Luigi) is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and collaboration skills.

Preferred Qualifications:
- Experience with additional AWS services (e.g., AWS Glue, AWS Lambda, Amazon Redshift).
- Familiarity with data warehousing and ETL processes.
- Knowledge of data governance and best practices.
- A good understanding of OOP (object-oriented programming) concepts.
- Hands-on experience with SQL database design.
- Experience with Python, SQL, and data visualization/exploration tools.
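For illustration, here is a minimal PySpark sketch of the partitioning and caching concerns this role mentions; the bucket paths, column names, and partition counts are hypothetical placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-pipeline").getOrCreate()

# Hypothetical S3 source path.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Repartition by a high-cardinality key to balance work across executors.
events = events.repartition(200, "customer_id")

# Cache only when the DataFrame is reused by several downstream actions.
cleaned = events.filter(F.col("event_type").isNotNull()).cache()

daily_counts = (cleaned
                .groupBy("event_date", "event_type")
                .agg(F.count("*").alias("n_events")))

# Partition output on a low-cardinality column so readers can prune files.
(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/curated/daily_event_counts/"))

cleaned.unpersist()
```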
Posted 5 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
ZS is a place where passion changes lives. As a management consulting and technology firm focused on improving life and how we live it, our most valuable asset is our people. Here you'll work side-by-side with a powerful collective of thinkers and experts shaping life-changing solutions for patients, caregivers and consumers, worldwide. ZSers drive impact by bringing a client-first mentality to each and every engagement. We partner collaboratively with our clients to develop custom solutions and technology products that create value and deliver company results across critical areas of their business. Bring your curiosity for learning, bold ideas, courage and passion to drive life-changing impact to ZS.

Our most valuable asset is our people. At ZS we honor the visible and invisible elements of our identities, personal experiences and belief systems, the ones that comprise us as individuals, shape who we are and make us unique. We believe your personal interests, identities, and desire to learn are part of your success here. Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.

What you'll do:
- Lead end-to-end projects using cloud technologies to solve complex business problems
- Provide technology expertise to maximize value for clients and project teams
- Drive strong delivery methodology to ensure projects are delivered on time, within budget and to the client's satisfaction
- Ensure technology solutions are scalable, resilient, and optimized for performance and cost
- Guide, coach, and mentor project team members for continuous learning and professional growth
- Demonstrate expertise, facilitation, and strong interpersonal skills in internal and client interactions
- Collaborate with ZS experts to drive innovation and minimize project risks
- Work globally with team members to ensure smooth project delivery
- Bring structure to unstructured work for developing business cases with clients
- Assist ZS Leadership with business case development, innovation, thought leadership and team initiatives

What you'll bring:
- Candidates must either be in their junior year of a Bachelor's degree or in their first year of a Master's degree specializing in Business Analytics, Computer Science, MIS, MBA, or a related field with academic excellence
- 5+ years of consulting experience in leading large-scale technology implementations
- Strong communication skills to convey technical concepts to diverse audiences
- Significant supervisory, coaching, and hands-on project management skills
- Extensive experience with major cloud platforms like AWS, Azure and GCP
- Deep knowledge of enterprise data management, advanced analytics, process automation, and application development
- Familiarity with industry-standard products and platforms such as Snowflake, Databricks, Redshift, Salesforce, Power BI, Cloud
- Experience in delivering projects using agile methodologies

Additional skills:
- Capable of managing a virtual global team for the timely delivery of multiple projects
- Experienced in analyzing and troubleshooting interactions between databases, operating systems, and applications
- Travel to global offices as required to collaborate with clients and internal project teams

Perks & Benefits: ZS offers a comprehensive total rewards package including health and well-being, financial planning, annual leave, personal growth and professional development.
Our robust skills development programs, multiple career progression options, internal mobility paths and collaborative culture empower you to thrive as an individual and global team member. We are committed to giving our employees a flexible and connected way of working. A flexible and connected ZS allows us to combine work from home and on-site presence at clients/ZS offices for the majority of our week. The magic of ZS culture and innovation thrives in both planned and spontaneous face-to-face connections.

Travel: Travel is a requirement at ZS for client-facing ZSers; the business needs of your project and client are the priority. While some projects may be local, all client-facing ZSers should be prepared to travel as needed. Travel provides opportunities to strengthen client relationships, gain diverse experiences, and enhance professional growth by working in different environments and cultures.

Considering applying? At ZS, we're building a diverse and inclusive company where people bring their passions to inspire life-changing impact and deliver better outcomes for all. We are most interested in finding the best candidate for the job and recognize the value that candidates with all backgrounds, including non-traditional ones, bring. If you are interested in joining us, we encourage you to apply even if you don't meet 100% of the requirements listed above. ZS is an equal opportunity employer and is committed to providing equal employment and advancement opportunities without regard to any class protected by applicable law.

To Complete Your Application: Candidates must possess or be able to obtain work authorization for their intended country of employment. An online application, including a full set of transcripts (official or unofficial), is required to be considered. NO AGENCY CALLS, PLEASE.

Find Out More At: www.zs.com
Posted 5 days ago
40.0 years
0 Lacs
Murud, Maharashtra, India
On-site
Overview
Stats Perform is the market leader in sports tech. We provide the most trusted sports data to some of the world's biggest organizations, across sports, media, and broadcasting. Through the latest AI technologies and machine learning, we combine decades' worth of data with the latest in-game happenings. We then offer coaches, teams, professional bodies, and media channels around the world access to the very best data, content, and insights, in turn improving how sports fans interact with their favorite sports teams and competitions.

How do they use it?
- Media outlets add a little magic to their coverage with our stats and graphics packages.
- Sportsbooks can offer better predictions and more accurate odds.
- The world's top coaches are known to use our data to make critical team decisions.
- Sports commentators can engage with fans on a deeper level, using our stories and insights.

Anywhere you find sport, Stats Perform is there. However, data and tech are only half of the package. We need great people to fuel the engine, and we succeed thanks to a team of amazing people. They spend their days collecting, analyzing, and interpreting data from a wide range of live sporting events. If you combine this real-time data with our 40-year-old archives, elite journalists, camera operators, copywriters, the latest in AI wizardry, and a host of 'behind the scenes' support staff, you've got all the ingredients to make it a magical experience!

Responsibilities
We are seeking a highly analytical and detail-oriented Business Analyst to join our team. This role is crucial in transforming raw data into actionable insights, primarily through the development of interactive dashboards and comprehensive data analysis. The successful candidate will bridge the gap between business needs and technical solutions, enabling data-driven decision-making across the organization.

Key Responsibilities
- Requirements Gathering: Collaborate with stakeholders across various departments to understand their data needs, business challenges, and reporting requirements.
- Data Analysis: Perform in-depth data analysis to identify trends, patterns, and anomalies, providing clear and concise insights to support strategic initiatives.
- Dashboard Development: Design, develop, and maintain interactive and user-friendly dashboards using leading data visualization tools (e.g., Tableau, Power BI) to present key performance indicators (KPIs) and business metrics.
- Data Modeling & Querying: Utilize SQL to extract, transform, and load data from various sources, ensuring data accuracy and integrity for reporting and analysis.
- Reporting & Presentation: Prepare and deliver compelling reports and presentations of findings and recommendations to both technical and non-technical audiences.
- Data Quality: Work closely with IT and data teams to ensure data quality, consistency, and accessibility.
- Continuous Improvement: Proactively identify opportunities for process improvements, data efficiency, and enhanced reporting capabilities.
- Stakeholder Management: Build strong relationships with business users, understanding their evolving needs and providing ongoing support for data-related queries.

Desired Qualifications
- Education: Bachelor's degree in Business, Finance, Economics, Computer Science, Information Systems, or a related quantitative field.
- Experience: Proven experience (typically 3+ years) as a Business Analyst, Data Analyst, or similar role with a strong focus on data analysis and dashboarding.
- Data Visualization Tools: Proficiency in at least one major data visualization tool (e.g., Tableau, Microsoft Power BI, Looker).
- SQL: Strong proficiency in SQL for data extraction, manipulation, and analysis from relational databases.
- Data Analysis: Excellent analytical and problem-solving skills with the ability to interpret complex datasets and translate them into actionable business insights.
- Communication: Exceptional written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Business Acumen: Solid understanding of business processes and key performance indicators.
- Attention to Detail: Meticulous attention to detail and a commitment to data accuracy.

Nice-to-Have
- Experience with statistical programming languages (e.g., Python with Pandas/NumPy) for advanced data manipulation and analysis.
- Familiarity with data warehousing concepts and cloud data platforms (e.g., Snowflake, AWS Redshift, Google BigQuery).
- Experience with advanced Excel functions (e.g., Power Query, Power Pivot).
- Certification in relevant data visualization tools.

Why work at Stats Perform?
We love sports, but we love diverse thinking more! We know that diversity brings creativity, so we invite people from all backgrounds to join us. At Stats Perform you can make a difference: by using your skills and experience every day, you'll feel valued and respected for your contribution.

We take care of our colleagues. We like happy and healthy colleagues. You will benefit from things like Mental Health Days Off, 'No Meeting Fridays,' and flexible working schedules. We pull together to build a better workplace and world for all. We encourage employees to take part in charitable activities, utilize their 2 days of Volunteering Time Off, support our environmental efforts, and be actively involved in Employee Resource Groups.

Diversity, Equity, and Inclusion at Stats Perform
By joining Stats Perform, you'll be part of a team that celebrates diversity. A team that is dedicated to creating an inclusive atmosphere where everyone feels valued and welcome. All employees are collectively responsible for developing and maintaining an inclusive environment. That is why our Diversity, Equity, and Inclusion goals underpin our core values. With increased diversity comes increased innovation and creativity, ensuring we're best placed to serve our clients and communities. Stats Perform is committed to seeking diversity, equity, and inclusion in all we do.
Posted 5 days ago
6.0 - 8.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
- 6-8 years of good hands-on exposure to Big Data technologies – pySpark (DataFrames and SparkSQL), Hadoop, and Hive (a minimal pySpark sketch follows this posting)
- Good hands-on experience with Python and Bash scripts
- Good understanding of SQL and data warehouse concepts
- Strong analytical, problem-solving, data analysis and research skills
- Demonstrable ability to think outside the box and not be dependent on readily available tools
- Excellent communication, presentation and interpersonal skills are a must
- Hands-on experience with cloud-platform-provided Big Data technologies (e.g., IAM, Glue, EMR, Redshift, S3, Kinesis)
- Orchestration with Airflow; experience with any job scheduler
- Experience in migrating workloads from on-premise to cloud and cloud-to-cloud migrations (good to have)

Role
- Develop efficient ETL pipelines per business requirements, following development standards and best practices.
- Perform integration testing of the created pipelines in the AWS environment.
- Provide estimates for development, testing and deployment on different environments.
- Participate in peer code reviews to ensure our applications comply with best practices.
- Create cost-effective AWS pipelines with the required AWS services, i.e., S3, IAM, Glue, EMR, Redshift etc.

Experience: 6 to 8 years
Job Reference Number: 13024
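As a rough illustration of the pySpark DataFrame and SparkSQL work described above, here is a minimal sketch; it assumes a configured Hive metastore, and the database, table, and S3 path names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("orders-etl")
         .enableHiveSupport()   # assumes a Hive metastore is configured
         .getOrCreate())

# Hypothetical Hive source table, registered as a temp view for SparkSQL.
orders = spark.table("raw_db.orders")
orders.createOrReplaceTempView("orders")

daily_revenue = spark.sql("""
    SELECT order_date,
           region,
           SUM(amount) AS total_revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY order_date, region
""")

# Write partitioned Parquet to a hypothetical S3 location.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/"))
```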
Posted 5 days ago
12.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
- BTech degree in computer science, engineering or a related field of study, or 12+ years of related work experience
- 7+ years of design and implementation experience with large-scale, data-centric distributed applications
- Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, databases etc.
- Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling etc.
- Good understanding of various architecture patterns like data lake, data lakehouse, data mesh etc.
- Good understanding of Data Warehousing concepts, with hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata etc.
- Experience migrating or transforming legacy customer solutions to the cloud.
- Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone etc.
- Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase etc. and other competent tools and technologies
- Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition etc. in combination with SageMaker is good to have.
- Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more.
- Experience with a programming or scripting language – Python/Java/Scala
- AWS Professional/Specialty certification or relevant cloud expertise

Role
- Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries.
- Capable of leading a technology team, inculcating an innovative mindset and enabling fast-paced deliveries.
- Able to adapt to new technologies, learn quickly, and manage high ambiguity.
- Ability to work with business stakeholders and attend/drive various architectural, design and status calls with multiple stakeholders.
- Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers.
- Drive technology/software sales or pre-sales consulting discussions.
- Ensure end-to-end ownership of all tasks being aligned.
- Ensure high-quality software development with complete documentation and traceability.
- Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups).
- Conduct technical trainings/sessions, and write whitepapers/case studies/blogs etc.

Experience: 10 to 18 years
Job Reference Number: 12895
Posted 5 days ago
14.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
Qualification
We are seeking a highly experienced and dynamic Technical Project Manager to lead and manage our service engagements. The candidate will possess a strong technical background, exceptional project management skills, and a proven track record of successfully delivering large-scale IT projects. You will be responsible for leading cross-functional teams, managing client relationships, and ensuring projects are delivered on time, within budget, and to the highest quality standards.
- 14+ years of experience managing and implementing high-end software products, combined with technical knowledge in the Business Intelligence (BI) and Data Engineering domains
- 5+ years of experience in project management with strong leadership and team management skills
- Hands-on with project management tools (e.g., Jira, Rally, MS Project) and strong expertise in Agile methodologies (certifications such as SAFe, CSM, PMP or PMI-ACP are a plus)
- Well versed in tracking project performance using appropriate metrics, tools and processes to successfully meet short- and long-term goals
- Rich experience interacting with clients, translating business needs into technical requirements, and delivering customer-focused solutions
- Exceptional verbal and written communication skills, with the ability to present complex concepts to technical and non-technical stakeholders alike
- Strong understanding of BI concepts (reporting, analytics, data warehousing, ETL), leveraging expertise in tools such as Tableau, Power BI, Looker, etc.
- Knowledge of data modeling, database design, and data governance principles
- Proficiency in Data Engineering technologies (e.g., SQL, Python, cloud-based data solutions/platforms like AWS Redshift, Google BigQuery, Azure Synapse, Snowflake, Databricks) is a plus

Role
This is a multi-dimensional and multi-functional role. You will need to be comfortable reporting program status to executives, as well as diving deep into technical discussions with internal engineering teams and external partners.
- Act as the primary point of contact for stakeholders and customers, gathering requirements, managing expectations, and delivering regular updates on project progress
- Manage and mentor cross-functional teams, fostering collaboration and ensuring high performance while meeting project milestones
- Drive Agile practices (e.g., Scrum, Kanban) to ensure iterative delivery, adaptability, and continuous improvement throughout the project lifecycle
- Identify, assess, and mitigate project risks, ensuring timely resolution of issues and adherence to quality standards
- Maintain comprehensive project documentation, including status reports, roadmaps, and post-mortem analyses, to ensure transparency and accountability
- Define the project and delivery plan, including scope, timelines, budgets, and deliverables for each assignment
- Capable of doing resource allocation as per the requirements of each assignment

Experience: 14 to 18 years
Job Reference Number: 12929
Posted 5 days ago
6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are hiring a Data Engineer for Pune/Hyderabad/Bangalore.
Experience: 6+ Years
Designation: Senior Software Engineer/Lead Software Engineer – Data Engineer
Skill Tech stack: AWS Data Engineer, Python, PySpark, SQL, Data Pipeline, AWS, AWS Glue, Lambda

JD:
- 6+ years of experience in data engineering, specifically in cloud environments like AWS.
- Proficiency in Python and PySpark for data processing and transformation tasks.
- Solid experience with AWS Glue for ETL jobs and managing data workflows (see the Glue job sketch after this posting).
- Hands-on experience with AWS Data Pipeline (DPL) for workflow orchestration.
- Strong experience with AWS services such as S3, Lambda, Redshift, RDS, and EC2.

Technical Skills:
- Deep understanding of ETL concepts and best practices.
- Strong knowledge of SQL for querying and manipulating relational and semi-structured data.
- Experience with Data Warehousing and Big Data technologies, specifically within AWS.

Additional Skills:
- Experience with AWS Lambda for serverless data processing and orchestration.
- Understanding of AWS Redshift for data warehousing and analytics.
- Familiarity with Data Lakes, Amazon EMR, and Kinesis for streaming data processing.
- Knowledge of data governance practices, including data lineage and auditing.
- Familiarity with CI/CD pipelines and Git for version control.
- Experience with Docker and containerization for building and deploying applications.

Responsibilities:
- Design and Build Data Pipelines: Design, implement, and optimize data pipelines on AWS using PySpark, AWS Glue, and AWS Data Pipeline to automate data integration, transformation, and storage processes.
- ETL Development: Develop and maintain Extract, Transform, and Load (ETL) processes using AWS Glue and PySpark to efficiently process large datasets.
- Data Workflow Automation: Build and manage automated data workflows using AWS Data Pipeline, ensuring seamless scheduling, monitoring, and management of data jobs.
- Data Integration: Work with different AWS data storage services (e.g., S3, Redshift, RDS) to ensure smooth integration and movement of data across platforms.
- Optimization and Scaling: Optimize and scale data pipelines for high performance and cost efficiency, utilizing AWS services like Lambda, S3, and EC2.

Interested or know someone who fits? Send your resume to gautam@mounttalent.com.
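A minimal AWS Glue job skeleton of the kind this role describes; it runs inside the Glue job runtime (not locally), and the catalog database, table, and S3 target are hypothetical placeholders.

```python
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve arguments and initialize the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog as a DynamicFrame (hypothetical names).
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="example_db", table_name="raw_orders")

# Drop obviously bad records, then convert to a Spark DataFrame.
df = dyf.toDF().filter("order_id IS NOT NULL")

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")

job.commit()
```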
Posted 6 days ago
3.0 - 6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Qualification
- OLAP, Data Engineering, Data Warehousing, ETL
- Hadoop ecosystem, or AWS, Azure or GCP cluster and processing
- Experience working on Hive, Spark SQL, Redshift or Snowflake
- Experience in writing and troubleshooting SQL programming or MDX queries
- Experience working on Linux
- Experience in Microsoft Analysis Services (SSAS) or OLAP tools
- Tableau, MicroStrategy or any BI tools
- Expertise in programming in Python, Java or Shell Script would be a plus

Role
- Be the frontend person of the world's most scalable OLAP product company – Kyvos Insights.
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
- Be the go-to person for prospects regarding technical issues during the POV stage.
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
- Lead a few small but highly efficient teams of big data engineers.
- Report task status efficiently to stakeholders and customers.
- Good verbal and written communication skills.
- Be willing to work off hours to meet timelines.
- Be willing to travel or relocate as per project requirements.

Experience: 3 to 6 years
Job Reference Number: 10350
Posted 6 days ago
5.0 - 10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification
Required:
- Proven hands-on experience designing, developing and supporting database projects for analysis in a demanding environment.
- Proficient in database design techniques – relational and dimensional designs.
- Experience with, and a strong understanding of, the business analysis techniques used.
- High proficiency in the use of SQL or MDX queries.
- Ability to manage multiple maintenance, enhancement and project-related tasks.
- Ability to work independently on multiple assignments and to work collaboratively within a team.
- Strong communication skills with both internal team members and external business stakeholders.

Added Advantage:
- Hadoop ecosystem, or AWS, Azure or GCP cluster and processing.
- Experience working on Hive, Spark SQL, Redshift or Snowflake will be an added advantage.
- Experience working on Linux systems.
- Experience with Tableau, MicroStrategy, Power BI or any BI tools will be an added advantage.
- Expertise in programming in Python, Java or Shell Script would be a plus.

Role
Roles & Responsibilities:
- Be the frontend person of the world's most scalable OLAP product company – Kyvos Insights.
- Interact with the senior-most technical and business people of large enterprises to understand their big data strategy and their problem statements in that area.
- Create, present, align customers with, and implement solutions around Kyvos products for the most challenging enterprise BI/DW problems.
- Be the go-to person for customers regarding technical issues during the project.
- Be instrumental in reading the pulse of the big data market and defining the roadmap of the product.
- Lead a few small but highly efficient teams of big data engineers.
- Report task status efficiently to stakeholders and customers.
- Good verbal and written communication skills.
- Be willing to work off hours to meet timelines.
- Be willing to travel or relocate as per project requirements.

Experience: 5 to 10 years
Job Reference Number: 11078
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
We are looking for a Data Engineer with strong experience in cloud platforms (AWS & Azure), Scala programming, and a solid understanding of data architecture and governance frameworks. You will play a key role in building, optimizing, and maintaining scalable data pipelines and systems while ensuring data quality, security, and compliance across the organization.

Key Responsibilities

Data Engineering & Development
- Design and develop reliable, scalable ETL/ELT data pipelines using Scala, SQL, and orchestration tools.
- Integrate and process structured, semi-structured, and unstructured data from various sources (APIs, databases, flat files, etc.).
- Develop solutions on AWS (e.g., S3, Glue, Redshift, EMR) and Azure (e.g., Data Factory, Synapse, Blob Storage).

Cloud & Infrastructure
- Build cloud-native data solutions that align with enterprise architecture standards.
- Leverage IaC tools (Terraform, CloudFormation, ARM templates) to deploy and manage infrastructure.
- Monitor performance, cost, and security posture of data environments in both AWS and Azure.

Data Architecture & Governance
- Collaborate with data architects to define and implement logical and physical data models.
- Apply data governance principles including data cataloging, lineage tracking, data privacy, and compliance (e.g., GDPR).
- Support the enforcement of data policies and data quality standards across data domains.

Collaboration & Communication
- Work cross-functionally with data analysts, scientists, architects, and business stakeholders to support data needs.
- Participate in Agile ceremonies and contribute to sprint planning and reviews.
- Maintain clear documentation of pipelines, data models, and data flows.

Required Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3–6 years of experience in data engineering or data platform development.
- Hands-on experience with AWS and Azure data services.
- Proficient in Scala for data processing (e.g., Spark, Kafka Streams).
- Strong SQL skills and familiarity with distributed systems.
- Experience with orchestration tools (e.g., Apache Airflow, Azure Data Factory).
Posted 6 days ago
2.0 years
0 Lacs
India
Remote
Job description
L1 Support – Data Engineering (Remote, South India)
Location: Permanently based in South India (any city) – non-negotiable
Work Mode: Remote | 6 days/week | 24x7x365 support (rotational shifts)
Salary Range: INR 2.5 to 3 Lacs per annum
Experience: 2 years
Language: English proficiency mandatory; Hindi is a plus

About the Role
We're looking for an experienced and motivated L1 Support Engineer – Data Engineering to join our growing team. If you have solid exposure to AWS, SQL, and Python scripting, and you're ready to thrive in a 24x7 support environment, this role is for you!

What You'll Do
- Monitor and support AWS services (S3, EC2, CloudWatch, IAM)
- Handle SQL-based issue resolution and data analysis
- Run and maintain Python scripts; shell scripting is a plus
- Support ETL pipelines and data workflows
- Monitor Apache Airflow DAGs and resolve basic issues (a minimal DAG sketch follows this posting)
- Collaborate with cross-functional and multicultural teams

What We're Looking For
- B.Tech or MCA preferred, but candidates with a Bachelor's degree in any field and the right skillset are welcome to apply
- 2 years of Data Engineering Support or similar experience
- Strong skills in AWS, SQL, Python, and ETL processes
- Familiarity with data warehousing (Amazon Redshift or similar)
- Ability to work rotational shifts in a 6-day, 24x7 environment
- Excellent communication and problem-solving skills
- English fluency is required; Hindi is an advantage
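For context on the Airflow monitoring mentioned above, here is a minimal Airflow 2.x DAG sketch with a failure callback of the kind an L1 engineer would watch; the DAG id, task, and callback behavior are hypothetical.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # In a real setup this might page an on-call channel; here we just log.
    print(f"Task {context['task_instance'].task_id} failed.")


def extract():
    print("pulling a daily file from S3 (placeholder)")


with DAG(
    dag_id="daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={
        "retries": 2,
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,
    },
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```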
Posted 6 days ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Summary:
We are looking for a skilled and motivated Software Engineer with strong experience in data engineering and ETL processes. The ideal candidate should be comfortable working with any object-oriented programming language, possess strong SQL skills, and have hands-on experience with AWS services like S3 and Redshift. Experience in Ruby and working knowledge of Linux are a plus.

Key Responsibilities:
- Design, build, and maintain robust ETL pipelines to handle large volumes of data.
- Work closely with cross-functional teams to gather data requirements and deliver scalable solutions.
- Write clean, maintainable, and efficient code using object-oriented programming and SOLID principles.
- Optimize SQL queries and data models for performance and reliability.
- Use AWS services (S3, Redshift, etc.) to develop and deploy data solutions (see the load sketch after this posting).
- Troubleshoot issues in data pipelines and perform root cause analysis.
- Collaborate with DevOps/infra teams for deployment, monitoring, and scaling data jobs.

Required Skills:
- 6+ years of experience in Data Engineering.
- Programming: Proficiency in any object-oriented language (e.g., Java, Python, etc.). Bonus: Experience in Ruby is a big plus.
- SQL: Moderate to advanced skills in writing complex queries and handling data transformations.
- AWS: Must have hands-on experience with services like S3 and Redshift.
- Linux: Familiarity with Linux-based systems is good to have.

Preferred Qualifications:
- Experience working in a data/ETL-focused role.
- Familiarity with version control systems like Git.
- Understanding of data warehouse concepts and performance tuning.
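A hedged sketch of the S3-to-Redshift loading this role involves, using Redshift's COPY command via psycopg2; the cluster endpoint, credentials, IAM role ARN, and table/path names are placeholders.

```python
import psycopg2

# Redshift ingests S3 files most efficiently through COPY rather than
# row-by-row INSERTs. All connection details below are hypothetical.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="analytics",
    user="etl_user",
    password="***",
)

copy_sql = """
    COPY staging.orders
    FROM 's3://example-bucket/incoming/orders/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-role'
    FORMAT AS PARQUET;
"""

# "with conn" commits the transaction on success, rolls back on error.
with conn, conn.cursor() as cur:
    cur.execute(copy_sql)

conn.close()
```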
Posted 6 days ago
80.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title
Associate Data Engineer (Internship Program to Full-time Employee)

Job Description
For more than 80 years, Kaplan has been a trailblazer in education and professional advancement. We are a global company at the intersection of education and technology, focused on collaboration, innovation, and creativity to deliver a best-in-class educational experience and make Kaplan a great place to work. Our offices in India opened in Bengaluru in 2018. Since then, our team has fueled growth and innovation across the organization, impacting students worldwide. We are eager to grow and expand with skilled professionals like you who use their talent to build solutions, enable effective learning, and improve students' lives. The future of education is here and we are eager to work alongside those who want to make a positive impact and inspire change in the world around them.

The Associate Data Engineer at Kaplan North America (KNA) within the Analytics division will work with world-class psychometricians, data scientists and business analysts to forever change the face of education. This role is a hands-on technical expert who will help implement an Enterprise Data Warehouse powered by AWS RA3 as a key feature of our Lake House architecture. The perfect candidate possesses strong technical knowledge in data engineering, data observability, infrastructure automation, DataOps methodology, systems architecture, and development. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in a fast-paced environment, understanding the business requirements and implementing data and reporting solutions. Above all, you should be passionate about working with big data and someone who loves to bring datasets together to answer business questions and drive change.

Responsibilities
- Design, implement, and deploy data solutions, solving difficult problems and generating positive feedback.
- Build different types of data warehousing layers based on specific use cases.
- Lead the design, implementation, and successful delivery of large-scale, critical, or difficult data solutions involving a significant amount of work.
- Build scalable data infrastructure and understand distributed systems concepts from a data storage and compute perspective.
- Utilize expertise in SQL and have a strong understanding of ETL and data modeling.
- Ensure the accuracy and availability of data to customers and understand how technical decisions can impact their business's analytics and reporting.
- Be proficient in at least one scripting/programming language to handle large-volume data processing.
- 30-day notice period preferred.

Requirements
- In-depth knowledge of the AWS stack (RA3, Redshift, Lambda, Glue, SNS).
- Experience in data modeling, ETL development and data warehousing.
- Effective troubleshooting and problem-solving skills.
- Strong customer focus, ownership, urgency and drive.
- Excellent verbal and written communication skills and the ability to work well in a team.

Preferred Qualification
- Proficiency with Airflow, Tableau & SSRS

Location: Bangalore, KA, India
Additional Locations:
Employee Type: Employee
Job Functional Area: Systems Administration/Engineering
Business Unit: 00091 Kaplan Higher ED

At Kaplan, we recognize the importance of attracting and retaining top talent to drive our success in a competitive market.
Our salary structure and compensation philosophy reflect the value we place on the experience, education, and skills that our employees bring to the organization, taking into consideration labor market trends and total rewards. All positions with Kaplan are paid at least $15 per hour or $31,200 per year for full-time positions. Additionally, certain positions are bonus or commission-eligible. And we have a comprehensive benefits package, learn more about our benefits here. Diversity & Inclusion Statement Kaplan is committed to cultivating an inclusive workplace that values diversity, promotes equity, and integrates inclusivity into all aspects of our operations. We are an equal opportunity employer and all qualified applicants will receive consideration for employment regardless of age, race, creed, color, national origin, ancestry, marital status, sexual orientation, gender identity or expression, disability, veteran status, nationality, or sex. We believe that diversity strengthens our organization, fuels innovation, and improves our ability to serve our students, customers, and communities. Learn more about our culture here. Kaplan considers qualified applicants for employment even if applicants have an arrest or conviction in their background check records. Kaplan complies with related background check regulations, including but not limited to, the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. There are various positions where certain convictions may disqualify applicants, such as those positions requiring interaction with minors, financial records, or other sensitive and/or confidential information. Kaplan is a drug-free workplace and complies with applicable laws.
Posted 6 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Cloud and AWS Expertise:
- In-depth knowledge of AWS services related to data engineering: EC2, S3, RDS, DynamoDB, Redshift, Glue, Lambda, Step Functions, Kinesis, Iceberg, EMR, and Athena (see the Athena sketch after this posting).
- Strong understanding of cloud architecture and best practices for high availability and fault tolerance.

Data Engineering Concepts:
- Expertise in ETL/ELT processes, data modeling, and data warehousing.
- Knowledge of data lakes, data warehouses, and big data processing frameworks like Apache Hadoop and Spark.
- Proficiency in handling structured and unstructured data.

Programming and Scripting:
- Proficiency in Python, PySpark and SQL for data manipulation and pipeline development.
- Expertise in working with data warehousing solutions like Redshift.
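As a small illustration of querying S3 data in place with Athena, here is a hedged boto3 sketch; the region, database, query, and output bucket are hypothetical.

```python
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Submit a query against a hypothetical Glue Catalog database.
resp = athena.start_query_execution(
    QueryString="SELECT event_type, COUNT(*) AS n FROM events GROUP BY event_type",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = resp["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

print("Query finished with state:", state)
```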
Posted 6 days ago
0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications
- Bachelor's degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs (a minimal ingestion sketch follows this posting)
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS-native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
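For the RESTful API integration mentioned above, a hedged sketch of paginated ingestion with the requests library; the endpoint, token, and pagination parameters are hypothetical.

```python
import requests

# Hypothetical paginated REST endpoint protected by an OAuth bearer token.
BASE_URL = "https://api.example.com/v1/claims"
HEADERS = {"Authorization": "Bearer <token>"}


def fetch_all(page_size=100):
    """Walk pages until the API returns an empty batch."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers=HEADERS,
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records


if __name__ == "__main__":
    rows = fetch_all()
    print(f"fetched {len(rows)} records")
```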
Posted 6 days ago
6.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Open Location - Indore, Noida, Gurgaon, Bangalore, Hyderabad, Pune

Job Description
- 6-9 years of experience working on data engineering, ETL/ELT processes, data warehousing, and data lake implementation with AWS or Azure services.
- Hands-on experience in designing and implementing solutions: creating/deploying jobs, orchestrating jobs/pipelines, and infrastructure configuration.
- Expertise in designing and implementing PySpark and Spark SQL based solutions (a broadcast-join sketch follows this posting).
- Design and implement data warehouses using Amazon Redshift, ensuring optimal performance and cost efficiency.
- Good understanding of security, compliance, and governance standards.

Roles & Responsibilities
- Design and implement robust and scalable data pipelines using AWS or Azure services.
- Drive architectural decisions for data solutions on AWS, ensuring scalability, security, and cost-effectiveness.
- Hands-on development and deployment of ETL/ELT processes using Glue/Azure Data Factory, Lambda/Azure Functions, Step Functions/Azure Logic Apps/MWAA, S3 and Lake Formation from various data sources.
- Strong proficiency in PySpark, SQL, and Python; proficiency in SQL for data querying and manipulation.
- Experience with data modelling, ETL processes, and data warehousing concepts.
- Create and maintain documentation for data pipelines and processes, following best practices.
- Knowledge of various Spark optimization techniques, monitoring and automation would be a plus.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Understanding of data governance, compliance, and security best practices.
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration skills, with an understanding of stakeholder mapping.

Mandatory Skills: AWS or Azure Cloud, Python programming, SQL, Spark SQL, Hive, Spark optimization techniques and PySpark.

Share resume at sonali.mangore@impetus.com with details (CTC, Expected CTC, Notice Period).
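One common Spark optimization technique this posting alludes to is the broadcast join; a minimal PySpark sketch follows, with hypothetical table paths and join keys.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()

# Hypothetical inputs: a large fact table and a small dimension table.
sales = spark.read.parquet("s3://example-bucket/facts/sales/")
stores = spark.read.parquet("s3://example-bucket/dims/stores/")

# Broadcasting the small side ships it to every executor and avoids
# shuffling the large fact table, a very common Spark join optimization.
enriched = sales.join(broadcast(stores), on="store_id", how="left")

enriched.write.mode("overwrite").parquet(
    "s3://example-bucket/curated/sales_enriched/")
```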
Posted 6 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities:
✅ Build and optimize scalable data pipelines using Python, PySpark, and SQL.
✅ Design and develop on the AWS stack (S3, Glue, EMR, Athena, Redshift, Lambda).
✅ Leverage Databricks for data engineering workflows and orchestration.
✅ Implement ETL/ELT processes with strong data modeling (Star/Snowflake schemas); see the star-schema sketch after this posting.
✅ Work on job orchestration using Airflow, Databricks Jobs, or AWS Step Functions.
✅ Collaborate with agile, cross-functional teams to deliver reliable data solutions.
✅ Troubleshoot and optimize large-scale distributed data environments.

Must-Have:
✅ 4–6+ years in Data Engineering.
✅ Hands-on experience in Python, SQL, PySpark, and AWS services.
✅ Solid Databricks expertise.
✅ Experience with DevOps tools: Git, Jenkins, GitHub Actions.
✅ Understanding of data lake/lakehouse/warehouse architectures.

Good to Have:
✅ AWS/Databricks certifications.
✅ Experience with data observability tools (Monte Carlo, Datadog).
✅ Exposure to regulated domains like Healthcare or Finance.
✅ Familiarity with streaming tools (Kafka, Kinesis, Spark Streaming).
✅ Knowledge of modern data concepts (Data Mesh, Data Fabric).
✅ Experience with visualization tools: Power BI, Tableau, QuickSight.
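To illustrate the star-schema modeling mentioned above, here is a hedged PySpark sketch of a fact-table build that swaps natural keys for dimension surrogate keys; all table names, paths, and columns are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("star-schema-load").getOrCreate()

# Hypothetical staging and dimension tables: in a star schema the fact
# row keeps only surrogate keys plus measures.
orders = spark.read.parquet("s3://example-bucket/staging/orders/")
dim_customer = spark.read.parquet("s3://example-bucket/dims/dim_customer/")
dim_date = spark.read.parquet("s3://example-bucket/dims/dim_date/")

fact_orders = (orders
    .join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
    .join(dim_date.select("date", "date_sk"),
          orders["order_date"] == dim_date["date"])
    .select("customer_sk", "date_sk",
            F.col("amount").alias("order_amount")))

fact_orders.write.mode("append").parquet(
    "s3://example-bucket/warehouse/fact_orders/")
```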
Posted 6 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job description
Job Name: Senior Data Engineer - IICS
Years of Experience: 5

Job Description:
We are looking for a skilled and motivated Senior Data Engineer to join our data integration and analytics team. The ideal candidate will have hands-on experience with Informatica IICS, AWS Redshift, Python scripting, and Unix/Linux systems. You will be responsible for building and maintaining scalable ETL pipelines to support business intelligence and analytics needs. A strong passion for continuous learning, problem-solving, and enabling data-driven decision-making is highly valued.

Primary Skills: Informatica IICS, AWS
Secondary Skills: Python, Unix/Linux

Role Description:
We are looking for a Senior Data Engineer to lead the design, development, and management of scalable data platforms and pipelines. This role demands a strong technical foundation in data architecture, big data technologies, and database systems (both SQL and NoSQL), along with the ability to work across functional teams to deliver robust, secure, and high-performing data solutions.

Role Responsibility:
- Design, develop, and maintain end-to-end data pipelines and infrastructure.
- Translate business and functional requirements into scalable, well-documented technical solutions.
- Build and manage data flows across structured and unstructured data sources, including streaming and batch integrations.
- Ensure data integrity and quality through automated validations, unit testing, and robust documentation (a validation sketch follows this posting).
- Optimize data processing performance and manage large datasets efficiently.
- Collaborate closely with stakeholders and project teams to align data solutions with business objectives.
- Implement and maintain security and privacy protocols to ensure safe data handling.
- Lead development environment setup and configuration of tools and services.
- Mentor junior data engineers and contribute to continuous improvement and automation initiatives.
- Coordinate with QA and UAT teams during testing and release phases.

Role Requirement:
- Strong proficiency in SQL (including procedures, performance tuning, and analytical functions).
- Solid understanding of data warehousing concepts, including dimensional modeling and SCDs.
- Hands-on experience with scripting languages (Shell/PowerShell).
- Familiarity with Cloud and Big Data technologies.
- Experience working with relational and non-relational databases, and data streaming systems.
- Proficiency in data profiling, validation, and testing practices.
- Excellent problem-solving, communication (written and verbal), and documentation skills.
- Exposure to Agile methodologies and CI/CD practices.
- Self-motivated, adaptable, and capable of working in a fast-paced environment.

Additional Requirements:
- 5 years overall, with 3+ years of hands-on experience with Informatica IICS (Cloud Data Integration, Application Integration).
- Strong proficiency in AWS Redshift and writing complex SQL queries.
- Solid programming experience in Python for scripting, data wrangling, and automation.
- Experience with version control tools like Git and CI/CD workflows.
- Knowledge of data modeling and data warehousing concepts.
- Prior experience with data lakes and big data technologies is a plus.
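One kind of automated validation this role mentions is a post-load row-count reconciliation; a hedged Python sketch follows, with placeholder connection details and table names.

```python
import psycopg2

# Reconcile row counts between a staging load and its Redshift target.
# All connection details and table names below are hypothetical.
SRC_TABLE = "staging.daily_orders"
TGT_TABLE = "analytics.daily_orders"

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)

with conn.cursor() as cur:
    cur.execute(f"SELECT COUNT(*) FROM {SRC_TABLE}")
    src_count = cur.fetchone()[0]
    cur.execute(f"SELECT COUNT(*) FROM {TGT_TABLE}")
    tgt_count = cur.fetchone()[0]

conn.close()

# Fail loudly so a scheduler or on-call engineer notices the mismatch.
if src_count != tgt_count:
    raise SystemExit(f"Mismatch: source={src_count}, target={tgt_count}")
print(f"Counts match: {src_count}")
```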
Posted 6 days ago
3.0 years
0 Lacs
India
Remote
Are you a talented Data Scientist (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer, LLM Engineer) who is either:
- looking for your next big challenge working remotely, OR
- employed, but open to offers from elite US companies to work remotely?

Submit your resume to GlobalPros.ai, an exclusive community of the world's top pre-vetted developers dedicated to precisely matching you with our US employers. GlobalPros.ai is followed internationally by over 13,000 employers, agencies and the world's top developers. We are currently searching for a full-time AI/ML developer (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer) to work remotely for our US employer clients.

What We Offer:
- Competitive compensation. Compensation is negotiable and commensurate with your experience and expertise.
- Pre-vetting, so you're 2x more likely to be hired. Recent studies by Indeed and LinkedIn show pre-vetted candidates like you are twice as likely to be hired.
- Shortlist competitive advantage. Our machine learning technology matches you precisely to job requirements, and because you're pre-vetted you're shortlisted ahead of other candidates.
- Personalized career support. Free one-on-one career counseling and interview prep to help guarantee you succeed.
- Anonymity. If you're employed but open to offers, your profile is anonymous and is not available on our website or otherwise online. When matched with our clients, your profile stays anonymous until you agree to be interviewed, so there's no risk in submitting your resume now.

We're Looking For:
- Experience: at least 3 years of experience.
- Role: AI/ML developer (includes AI/ML Researcher, AI/ML Engineer, Data Engineer, Data Scientist, ML Ops Engineer, QA Engineer with AI/ML focus, NLP Engineer).
- Skills: TensorFlow, PyTorch, Scikit-learn, Python, Java, C++, R, AWS, Azure, GCP, SQL, NoSQL, Hadoop, Spark, Docker, Kubernetes, AWS Redshift, Google BigQuery.
- Willing to work full-time (40 hours per week).
- Available for an hour of assessment testing.

Being deeply vetted, with a data-enhanced resume and matched precisely by our machine learning algorithms, substantially increases the probability of being hired quickly and at higher compensation levels over unvetted candidates. It's your substantial competitive advantage in a crowded job market.
Posted 6 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Role: Database Engineer
Location: Remote
Notice Period: 30 Days

Skills and Experience
- Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential.
- Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team's data presentation goals.
- Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes.
- Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
- Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
- Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
- Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
- Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark (a small Pandas/SQLAlchemy sketch follows this posting).
- Knowledge of SQL and understanding of database design principles, normalization, and indexing.
- Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
- Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery.
- Eagerness to develop import workflows and scripts to automate data import processes.
- Knowledge of data security best practices, including access controls, encryption, and compliance standards.
- Strong problem-solving and analytical skills with attention to detail.
- Creative and critical thinking.
- Strong willingness to learn and expand knowledge in data engineering.
- Familiarity with Agile development methodologies is a plus.
- Experience with version control systems, such as Git, for collaborative development.
- Ability to thrive in a fast-paced environment with rapidly changing priorities.
- Ability to work collaboratively in a team environment.
- Good and effective communication skills.
- Comfortable with autonomy and the ability to work independently.
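A minimal sketch of the Pandas + SQLAlchemy style of import workflow mentioned above; the connection string, CSV file, and column names are hypothetical.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical PostgreSQL target and CSV source.
engine = create_engine(
    "postgresql+psycopg2://etl_user:***@localhost:5432/analytics")

df = pd.read_csv("customers.csv", parse_dates=["signup_date"])

# Light cleanup before loading: normalize emails, drop exact duplicates.
df["email"] = df["email"].str.strip().str.lower()
df = df.drop_duplicates()

# Replace the staging table on each run; an incremental load would
# append and deduplicate against the target instead.
df.to_sql("customers", engine, schema="staging",
          if_exists="replace", index=False)
```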
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill (proficiency level expected): AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL Process, SQL, Databricks
Posted 6 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
AWS Data Engineer
- Design and build scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3.
- Develop efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes.
- Create and manage applications using Python, SQL, Databricks, and various AWS technologies.
- Automate repetitive tasks and build reusable frameworks to improve efficiency.

Skill (proficiency level expected): AWS Data Engineer - AWS Glue, Amazon Redshift, S3, ETL Process, SQL, Databricks
Posted 6 days ago
12.0 years
0 Lacs
India
Remote
Job Title: Senior Solution Architect – Data & Cloud
Experience: 12+ Years
Location: Hybrid / Remote
Employment Type: Full-time

About the Company:
We are a data and analytics firm that provides the strategies, tools, capability and capacity that businesses need to turn their data into a competitive advantage. USEReady partners with cloud and data ecosystem leaders such as Tableau, Salesforce, Snowflake, Starburst and Amazon Web Services, and has been named Tableau Partner of the Year multiple times. Headquartered in NYC, the company has 450 employees across offices in the U.S., Canada, India and Singapore and specializes in financial services. USEReady's deep analytics expertise, unique player/coach approach and focus on fast results make the company a perfect partner for a cloud-first, digital world.

About the Role:
We are looking for a highly experienced Senior Solution Architect to join our Migration Works practice, specializing in modern data platforms and visualization tools. The ideal candidate will bring deep technical expertise in Tableau, Power BI, AWS, and Snowflake, along with strong client-facing skills and the ability to design scalable, high-impact data solutions. You will be at the forefront of driving our AI-driven migration and modernization initiatives, working closely with customers to understand their business needs and guiding delivery teams to success.

Key Responsibilities:
Solution Design & Architecture
Lead the end-to-end design of cloud-native data architectures using the AWS, Snowflake, and Azure stacks.
Translate complex business requirements into scalable and efficient technical solutions.
Architect modernization strategies for moving legacy BI systems to cloud-native platforms.
Client Engagement
Conduct technical discussions with enterprise clients and stakeholders to assess needs and define roadmaps.
Act as a trusted advisor during pre-sales and delivery phases, demonstrating technical leadership and a consultative approach.
Migration & Modernization
Design frameworks for data platform migration (from on-premise to cloud), data warehousing, and analytics transformation (see the illustrative sketch at the end of this posting).
Support estimation, planning, and scoping of migration projects.
Team Leadership & Delivery Oversight
Guide and mentor delivery teams across geographies, ensuring solution quality and alignment with client goals.
Support delivery by providing architectural oversight and resolving design bottlenecks.
Conduct technical reviews, define best practices, and uplift the team's capabilities.

Required Skills & Experience:
15+ years of progressive experience in data and analytics, with at least 5 years in solution architecture roles.
Strong hands-on expertise in:
Tableau and Power BI – dashboard design, visualization architecture, and migration from legacy BI tools.
AWS – S3, Redshift, Glue, Lambda, and data pipeline components.
Snowflake – architecture, SnowConvert, data modeling, security, and performance optimization.
Experience migrating legacy platforms (e.g., Cognos, BusinessObjects, Qlik) to modern BI/cloud-native stacks such as Tableau and Power BI.
Proven ability to interface with senior client stakeholders, understand business problems, and propose architectural solutions.
Strong leadership, communication, and mentoring skills.
Familiarity with data governance, security, and compliance in cloud environments.

Preferred Qualifications:
AWS/Snowflake certifications are a strong plus.
Exposure to data catalogs, lineage tools, and metadata management.
Knowledge of ETL/ELT tools such as Talend, Informatica, or dbt.
Prior experience working in consulting or fast-paced client services environments.

What We Offer:
Opportunity to work on cutting-edge AI-led cloud and data migration projects.
A collaborative, high-growth environment with room to shape future strategy.
Access to learning programs, certifications, and technical leadership exposure.
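For illustration only, and not USEReady's actual tooling: a minimal Python sketch of one common step in an on-premise-to-cloud migration, loading legacy extracts already staged in S3 into a Snowflake table via COPY INTO. The stage, table, and credential names are hypothetical placeholders.

```python
# Minimal sketch: load staged legacy CSV extracts into Snowflake.
# All connection parameters, stage, and table names are hypothetical.
import snowflake.connector

SNOWFLAKE_STAGE = "@analytics.raw.legacy_extract_stage"  # hypothetical external stage over S3
TARGET_TABLE = "analytics.raw.sales_orders"              # hypothetical target table

def load_legacy_extracts() -> int:
    """Copy staged CSV extracts into Snowflake; return the number of files processed."""
    conn = snowflake.connector.connect(
        account="my_account",   # placeholder credentials; use a secrets manager in practice
        user="etl_user",
        password="***",
        warehouse="LOAD_WH",
    )
    try:
        cur = conn.cursor()
        # COPY INTO skips files it has already loaded, so reruns after a
        # partial failure are safe without duplicating rows.
        cur.execute(
            f"""
            COPY INTO {TARGET_TABLE}
            FROM {SNOWFLAKE_STAGE}
            FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
            ON_ERROR = 'ABORT_STATEMENT'
            """
        )
        return len(cur.fetchall())  # one result row per file processed
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"Loaded {load_legacy_extracts()} files")
```

In a real migration framework this step would sit behind an orchestrator and be parameterized per source table; the idempotent COPY semantics are what make such frameworks restartable.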
Posted 6 days ago
0.0 - 18.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Ref #: W143709
Department: Data Analytics
City: Bangalore
State/Province: Karnataka
Location: India

Company Description
Ralph Lauren Corporation (NYSE:RL) is a global leader in the design, marketing and distribution of premium lifestyle products in five categories: apparel, accessories, home, fragrances, and hospitality. For more than 50 years, Ralph Lauren's reputation and distinctive image have been consistently developed across an expanding number of products, brands and international markets. The Company's brand names, which include Ralph Lauren, Ralph Lauren Collection, Ralph Lauren Purple Label, Polo Ralph Lauren, Double RL, Lauren Ralph Lauren, Polo Ralph Lauren Children, and Chaps, among others, constitute one of the world's most widely recognized families of consumer brands. At Ralph Lauren, we unite and inspire the communities within our company, as well as those we serve, by amplifying voices and perspectives to create a culture of belonging, ensuring inclusion and fairness for all. We foster a culture of inclusion through: Talent, Education & Communication, Employee Groups and Celebration.

Position Overview
We are looking for an experienced Manager – Data Delivery to lead our data engineering initiatives, drive data-driven decision-making, and collaborate closely with stakeholders. The ideal candidate has strong expertise in data engineering, data visualization (preferably with MicroStrategy), cloud technologies (AWS), and stakeholder management. This role suits a data engineering leader who can bridge the gap between technical execution and business strategy, ensuring impactful data solutions that drive business success.

Essential Duties & Responsibilities
Lead and manage data engineering teams to design, develop, and optimize scalable data pipelines.
Drive end-to-end data solutions, from ingestion through transformation to reporting (see the pipeline sketch after this posting).
Oversee data visualization efforts, ensuring effective dashboards and reports in MicroStrategy or similar BI tools.
Collaborate with business teams to understand data needs and translate them into actionable insights.
Support the Delivery Architect, and implement data solutions on AWS and Databricks.
Ensure data governance, security, and compliance in alignment with business requirements.
Manage stakeholder expectations, communicate technical concepts effectively, and influence decision-making.
Run workshops with the business and align business stakeholders on optimal solutions.
Optimize and maintain ETL processes for performance, scalability, and cost efficiency.
Drive best practices for data modeling, metadata management, and data quality assurance.
Stay up to date with industry trends and emerging technologies to enhance the data ecosystem.
Mentor and develop team members, fostering a high-performance data engineering culture.
Contribute to hiring and talent development strategies to build a strong data team.

Experience, Skills & Knowledge
14-18 years of experience in data engineering, BI, and cloud technologies.
Proficiency in AWS data services (S3, Redshift, Glue, Athena, EMR, Kinesis, Lambda, etc.).
Well versed in Databricks.
Hands-on experience with ETL development, data pipelines, and data modeling.
Familiarity with SQL, Python, and Spark.
Exposure to modern data architectures and cloud-based analytics.
Strong understanding of data visualization principles with BI tools, preferably MicroStrategy.
Excellent stakeholder management skills with the ability to drive alignment and influence decisions.
Experience leading large teams (15+ members), delivering projects, and working in an Agile environment.
Strong problem-solving skills and the ability to balance technical and business priorities.
Experience managing distributed teams across multiple geographies, ensuring seamless collaboration and delivery.
Strong verbal, written, and visual communication skills.
AWS certification (preferred).
SAFe certification (preferred).
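For illustration only, and not Ralph Lauren's actual codebase: a minimal PySpark sketch of the ingest-transform-report pipeline shape this role oversees. The S3 paths, schema, and column names are hypothetical.

```python
# Minimal sketch of an ingest -> transform -> publish pipeline in PySpark.
# Bucket names, paths, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

# Ingest: raw order events landed in S3 as JSON (hypothetical path).
orders = spark.read.json("s3://example-raw-zone/orders/2025/07/")

# Transform: keep completed orders and roll revenue up by day and region.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Publish: write partitioned Parquet to the curated zone for BI tools to query.
(
    daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-zone/daily_revenue/")
)

spark.stop()
```

Partitioning the curated output by date is the kind of cost and performance decision the posting alludes to: BI queries that filter on recent dates then scan only the partitions they need.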
Posted 6 days ago
0.0 - 5.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Designation: Senior Analyst
Level: L2
Experience: 4 to 7 years
Location: Chennai

Job Description:
We are seeking a highly skilled, motivated, and results-driven Senior Analyst with 4+ years of experience to join a fast-paced, collaborative team at LatentView Analytics working in the financial services domain.

Responsibilities:
Drive measurement strategy and lead the end-to-end A/B testing process for areas of web optimization such as landing pages, user funnels, navigation, checkout, product lineup, pricing, search, and monetization opportunities (see the testing sketch after this posting).
Analyze web user behavior at both the visitor and session level using clickstream data, anchoring to key web metrics and identifying user behavior through engagement and pathing analysis.
Leverage AI/GenAI tools to automate tasks and build custom implementations.
Use data, strategic thinking, and advanced scientific methods, including predictive modeling, to enable data-backed decision-making for Intuit at scale.
Measure the performance and impact of product releases.
Apply strategic and systems thinking to solve business problems and influence strategic decisions through data storytelling.
Partner with GTM, Product, Engineering, and Design teams to drive analytics projects end to end.
Build models that identify patterns in traffic and user behavior to inform acquisition strategies and optimize for business outcomes.

Skills:
5+ years of experience in web, product, marketing, or other related analytics fields, solving marketing/product business problems.
4+ years of experience designing and executing experiments (A/B and multivariate) with a deep understanding of the statistics behind hypothesis testing.
Proficiency in alternative A/B testing methods such as difference-in-differences (DiD), synthetic control, and other causal inference techniques.
5+ years of technical proficiency in SQL and Python or R, and in data visualization tools like Tableau.
5+ years of experience manipulating and analyzing large, complex datasets (e.g., clickstream data), constructing ETL data pipelines, and working with big data technologies (e.g., Redshift, Spark, Hive, BigQuery) and cloud-platform solutions.
3+ years of experience in web analytics, analyzing website traffic patterns and conversion funnels.
5+ years of experience building ML models (e.g., regression, clustering, trees) for personalization applications.
Demonstrated ability to drive strategy, execution, and insights for AI-native experiences across the development lifecycle (ideation, discovery, experimentation, scaling).
Outstanding communication skills with both technical and non-technical audiences.
Ability to tell stories with data, influence business decisions at the leadership level, and provide solutions to business problems.
Ability to manage multiple projects simultaneously to meet objectives and key deadlines.

Job Snapshot
Updated Date: 28-07-2025
Job ID: J_3917
Location: Chennai, Tamil Nadu, India
Experience: 4 - 7 Years
Employee Type: Permanent
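For illustration only: a minimal Python sketch of the statistics behind a conversion-rate A/B test, a two-proportion z-test using statsmodels. The conversion counts below are made-up example numbers.

```python
# Minimal two-proportion z-test for a conversion-rate A/B test.
# The counts are hypothetical example data, not real experiment results.
from statsmodels.stats.proportion import proportions_ztest

# Conversions and visitors per variant: [control, treatment].
conversions = [412, 468]
visitors = [10_000, 10_050]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

control_rate = conversions[0] / visitors[0]
treatment_rate = conversions[1] / visitors[1]
lift = (treatment_rate - control_rate) / control_rate

print(f"control={control_rate:.2%} treatment={treatment_rate:.2%} lift={lift:+.1%}")
print(f"z={z_stat:.2f}, p={p_value:.4f}")
# Reject the null of equal conversion rates only if p is below the alpha
# chosen before the experiment (e.g., 0.05) and the planned sample size
# has been reached; peeking early inflates the false-positive rate.
```

The deeper understanding the posting asks for shows up in exactly the caveats in the comments: fixing alpha and sample size in advance, and reaching for DiD or synthetic control when clean randomization is not available.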
Posted 6 days ago