5.0 - 6.0 years
0 Lacs
Andhra Pradesh, India
On-site
Title: Developer (AWS Engineer)
Requirements:
- Must have 5-6 years of IT experience; at least 3 years on the AWS Cloud preferred
- Strong hands-on experience; proficient in Node.js and Python
- Seasoned developer capable of independently driving development tasks
- Ability to understand the existing system architecture and work towards the target architecture
- Experience with data profiling: discovering and documenting data quality issues
- Good to have: experience developing and implementing a large-scale Data Lake and data analytics platform on AWS
- Develop and unit test data pipeline architecture for data ingestion processes using AWS-native services
- Development experience on AWS with services such as Redshift, RDS, S3, Glue ETL, Glue Data Catalog, EMR, PySpark, Python, Lake Formation, Airflow, SQL scripts, etc. (see the Glue sketch below)
- Good to have: experience building a data analytics platform using Databricks (data pipelines) and Starburst (semantic layer) on AWS
- Experience orchestrating workflows in an enterprise environment
- Experience with source code management tools such as AWS CodeCommit or GitHub
- Experience with Jenkins or any CI/CD pipeline using AWS services
- Working experience with Agile methodology
- Experience working in an onshore/offshore model, collaborating on deliverables
- Good communication skills for interacting with the onshore team
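To make the Glue-based ingestion requirement concrete, here is a minimal sketch of a Glue PySpark job of the kind the posting describes. It is not from the posting: the S3 paths, the load_date stamp, and the job name are invented assumptions.

```python
# Minimal sketch of a Glue PySpark ingestion job; all S3 paths and names
# are hypothetical placeholders, not part of the posting.
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV files landed in S3.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/sales/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Stamp a load date and write curated Parquet partitioned by that date.
curated = DynamicFrame.fromDF(
    raw.toDF().withColumn("load_date", F.current_date().cast("string")),
    glue_context,
    "curated",
)
glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={
        "path": "s3://example-curated-bucket/sales/",
        "partitionKeys": ["load_date"],
    },
    format="parquet",
)
job.commit()
```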
Posted 5 days ago
40.0 years
0 Lacs
India
On-site
Overview
Stats Perform is the market leader in sports tech. We provide the most trusted sports data to some of the world's biggest organizations across sports, media, and broadcasting. Using the latest AI technologies and machine learning, we combine decades' worth of data with the latest in-game happenings, offering coaches, teams, professional bodies, and media channels around the world access to the very best data, content, and insights, in turn improving how sports fans interact with their favorite teams and competitions.

How do our clients use it? Media outlets add a little magic to their coverage with our stats and graphics packages. Sportsbooks can offer better predictions and more accurate odds. The world's top coaches are known to use our data to make critical team decisions. Sports commentators can engage with fans on a deeper level, using our stories and insights. Anywhere you find sport, Stats Perform is there.

However, data and tech are only half of the package; we need great people to fuel the engine. We succeed thanks to a team of amazing people who spend their days collecting, analyzing, and interpreting data from a wide range of live sporting events. Combine this real-time data with our 40-year archives, elite journalists, camera operators, copywriters, the latest in AI wizardry, and a host of behind-the-scenes support staff, and you've got all the ingredients for a magical experience!

Responsibilities
We are seeking a highly analytical and detail-oriented Business Analyst to join our team. This role is crucial in transforming raw data into actionable insights, primarily through the development of interactive dashboards and comprehensive data analysis. The successful candidate will bridge the gap between business needs and technical solutions, enabling data-driven decision-making across the organization.

Key Responsibilities
- Requirements Gathering: Collaborate with stakeholders across various departments to understand their data needs, business challenges, and reporting requirements.
- Data Analysis: Perform in-depth data analysis to identify trends, patterns, and anomalies, providing clear and concise insights to support strategic initiatives.
- Dashboard Development: Design, develop, and maintain interactive, user-friendly dashboards using leading data visualization tools (e.g., Tableau, Power BI) to present key performance indicators (KPIs) and business metrics.
- Data Modeling & Querying: Use SQL to extract, transform, and load data from various sources, ensuring data accuracy and integrity for reporting and analysis.
- Reporting & Presentation: Prepare and deliver compelling reports and presentations of findings and recommendations to both technical and non-technical audiences.
- Data Quality: Work closely with IT and data teams to ensure data quality, consistency, and accessibility.
- Continuous Improvement: Proactively identify opportunities for process improvements, data efficiency, and enhanced reporting capabilities.
- Stakeholder Management: Build strong relationships with business users, understanding their evolving needs and providing ongoing support for data-related queries.

Desired Qualifications
- Education: Bachelor's degree in Business, Finance, Economics, Computer Science, Information Systems, or a related quantitative field.
- Experience: Proven experience (typically 3+ years) as a Business Analyst, Data Analyst, or similar role with a strong focus on data analysis and dashboarding.
- Data Visualization Tools: Proficiency in at least one major data visualization tool (e.g., Tableau, Microsoft Power BI, Looker).
- SQL: Strong proficiency in SQL for data extraction, manipulation, and analysis from relational databases (a small example follows below).
- Data Analysis: Excellent analytical and problem-solving skills, with the ability to interpret complex datasets and translate them into actionable business insights.
- Communication: Exceptional written and verbal communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Business Acumen: Solid understanding of business processes and key performance indicators.
- Attention to Detail: Meticulous attention to detail and a commitment to data accuracy.

Nice-to-Have
- Experience with statistical programming languages (e.g., Python with Pandas/NumPy) for advanced data manipulation and analysis.
- Familiarity with data warehousing concepts and cloud data platforms (e.g., Snowflake, AWS Redshift, Google BigQuery).
- Experience with advanced Excel features (e.g., Power Query, Power Pivot).
- Certification in relevant data visualization tools.

Why work at Stats Perform?
We love sports, but we love diverse thinking more! We know that diversity brings creativity, so we invite people from all backgrounds to join us. At Stats Perform you can make a difference: by using your skills and experience every day, you'll feel valued and respected for your contribution.

We take care of our colleagues. We like happy and healthy colleagues, and you will benefit from things like Mental Health Days Off, 'No Meeting Fridays,' and flexible working schedules. We pull together to build a better workplace and world for all: we encourage employees to take part in charitable activities, use their 2 days of Volunteering Time Off, support our environmental efforts, and be actively involved in Employee Resource Groups.

Diversity, Equity, and Inclusion at Stats Perform
By joining Stats Perform, you'll be part of a team that celebrates diversity and is dedicated to creating an inclusive atmosphere where everyone feels valued and welcome. All employees are collectively responsible for developing and maintaining an inclusive environment, which is why our Diversity, Equity, and Inclusion goals underpin our core values. With increased diversity comes increased innovation and creativity, ensuring we're best placed to serve our clients and communities. Stats Perform is committed to seeking diversity, equity, and inclusion in all we do.
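For a concrete flavor of the SQL-to-dashboard skill set above, here is a small self-contained sketch. The coverage_log table and its columns are invented for illustration; the same pattern would feed a Tableau or Power BI prototype from any relational source.

```python
# Hypothetical sketch: extracting a monthly KPI with SQL into pandas.
# The table and column names are invented, not from the posting.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")  # stand-in for any relational source
conn.executescript("""
    CREATE TABLE coverage_log (match_date TEXT, turnaround_minutes REAL);
    INSERT INTO coverage_log VALUES
        ('2024-01-05', 42.0), ('2024-01-20', 35.5), ('2024-02-02', 39.0);
""")

kpi = pd.read_sql_query(
    """
    SELECT substr(match_date, 1, 7) AS month,
           COUNT(*)                 AS events_covered,
           AVG(turnaround_minutes)  AS avg_turnaround
    FROM coverage_log
    GROUP BY month
    ORDER BY month
    """,
    conn,
)
print(kpi)  # dashboard-ready KPI frame
```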
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Role:
We are seeking a highly skilled and experienced Data Architect with expertise in designing and building data platforms in cloud environments. The ideal candidate will have a strong background in either AWS or Azure data engineering, along with proficiency in distributed data processing systems like Spark. Proficiency in SQL, data modeling, and building data warehouses, plus knowledge of ingestion tools and data governance, are essential for this role. The Data Architect will also need experience with orchestration tools such as Airflow or Dagster and proficiency in Python; knowledge of Pandas is beneficial.

Why Choose Ideas2IT
Ideas2IT has all the good attributes of a product startup and a services company. Since we launch our own products, you will have ample opportunities to learn and contribute. However, single-product companies stagnate in the technologies they use; in our multiple product initiatives and customer-facing projects, you will have the opportunity to work on various technologies. AGI is going to change the world, and big companies like Microsoft are betting heavily on it. We are following suit.

What's in it for you?
- Work on impactful products instead of back-office applications, for customers like Facebook, Siemens, Roche, and more
- Work on interesting projects like the Cloud AI platform for personalized cancer treatment
- Opportunity to continuously learn newer technologies
- Freedom to bring your ideas to the table and make a difference, instead of being a small cog in a big wheel
- Showcase your talent in Shark Tanks and Hackathons conducted in the company

Here's what you'll bring
- Experience designing and building data platforms in any cloud, with strong expertise in either AWS or Azure data engineering
- Develop and optimize data processing pipelines using distributed systems like Spark
- Create and maintain data models to support efficient storage and retrieval
- Build and optimize data warehouses for analytical and reporting purposes, using technologies such as Postgres, Redshift, and Snowflake
- Knowledge of ingestion tools such as Apache Kafka, Apache NiFi, AWS Glue, or Azure Data Factory
- Establish and enforce data governance policies and procedures to ensure data quality and security
- Use orchestration tools like Airflow or Dagster to schedule and manage data workflows (a minimal Airflow sketch follows below)
- Develop scripts and applications in Python to automate tasks and processes
- Collaborate with stakeholders to gather requirements and translate them into technical specifications
- Communicate technical solutions effectively to clients and stakeholders
- Familiarity with multiple cloud ecosystems such as AWS, Azure, and Google Cloud Platform (GCP)
- Experience with containerization and orchestration technologies like Docker and Kubernetes
- Knowledge of machine learning and data science concepts
- Experience with data visualization tools such as Tableau or Power BI
- Understanding of DevOps principles and practices
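As a minimal sketch of the Airflow orchestration named above: the dag_id, schedule, and task bodies are placeholder assumptions, not from the posting.

```python
# Minimal Airflow DAG sketch for the orchestration work described above.
# The dag_id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source systems")

def transform():
    print("clean and model the data")

def load():
    print("write to the warehouse")

with DAG(
    dag_id="example_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> transform_task >> load_task
```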
Posted 5 days ago
6.0 - 11.0 years
18 - 33 Lacs
Noida, Pune, Delhi / NCR
Hybrid
Iris Software has been a trusted software engineering partner to several Fortune 500 companies for over three decades. We help clients realize the full potential of technology-enabled transformation by bringing together a unique blend of domain knowledge, best-of-breed technologies, and experience executing essential and critical application development engagements.

Title - Sr. Data Engineer / Lead Data Engineer
Experience - 5-12 years
Location - Delhi/NCR, Pune
Shift - 12:30-9:30 pm IST

- 6+ years of experience in data engineering with a strong focus on AWS services
- Proven expertise in Amazon S3 for scalable data storage, AWS Glue for ETL and serverless data integration, DataSync, EMR, and Redshift for data warehousing and analytics
- Proficiency in SQL, Python, or PySpark for data processing (a partitioning example follows below)
- Experience with data modeling, partitioning strategies, and performance optimization
- Familiarity with orchestration tools like AWS Step Functions, Apache Airflow, or Glue Workflows

If interested, kindly share your resume at kanika.singh@irissoftware.com
Note - Notice period: max 1 month
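One way to picture the partitioning-strategy requirement is the sketch below. The bucket paths and the event_ts column are invented, and a Spark environment with S3 access is assumed.

```python
# Sketch of a partitioning strategy: write date-partitioned Parquet so queries
# that filter on the partition column prune S3 prefixes. Paths are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("partitioning-demo").getOrCreate()

events = spark.read.json("s3://example-bucket/raw/events/")

(events
 .withColumn("event_date", F.to_date("event_ts"))
 .write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3://example-bucket/curated/events/"))

# A reader filtering on event_date scans only the matching partitions.
one_day = (spark.read.parquet("s3://example-bucket/curated/events/")
           .where(F.col("event_date") == "2024-01-01"))
print(one_day.count())
```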
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Key Responsibilities
- Design, develop, and deploy interactive and visually impactful Power BI dashboards and reports.
- Transform raw data into meaningful insights through effective data modeling and DAX measures.
- Collaborate with business stakeholders to gather reporting requirements and translate them into BI solutions.
- Optimize Power BI reports for performance and usability using advanced DAX and visualization techniques.
- Manage and integrate data from various sources such as SQL Server, Excel, SharePoint, and cloud services.
- Ensure data quality, consistency, and accuracy in all reporting outputs.
- Implement Row-Level Security (RLS), bookmarks, drill-throughs, and other advanced Power BI features.
- Develop automated data refresh schedules and monitor report performance on Power BI Service.

Required Skills & Qualifications
- Strong proficiency in DAX, Power Query (M), and building efficient data models.
- Experience in SQL and working with databases such as SQL Server, MySQL, Redshift, or Azure SQL.
- In-depth understanding of BI concepts, star/snowflake schemas, and data warehousing (see the star-schema sketch below).
- Familiarity with Power BI Service, including workspace management and deployment pipelines.
- Strong analytical, problem-solving, and data storytelling abilities.
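To ground the star-schema concept from the skills list, here is a tiny self-contained sketch with an invented schema. In Power BI, the equivalent rollup would typically be a DAX measure over the same fact and dimension tables; this shows the underlying relational shape.

```python
# Hedged illustration of a star schema: a fact table joined to a dimension
# for a dashboard-ready rollup. The schema and rows are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INT, category TEXT);
    CREATE TABLE fact_sales  (product_id INT, sale_date TEXT, amount REAL);
    INSERT INTO dim_product VALUES (1, 'Widgets'), (2, 'Gadgets');
    INSERT INTO fact_sales  VALUES (1, '2024-01-05', 100.0),
                                   (2, '2024-01-06', 40.0);
""")
rows = conn.execute("""
    SELECT d.category, SUM(f.amount) AS total_sales
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category
""").fetchall()
print(rows)  # [('Gadgets', 40.0), ('Widgets', 100.0)]
```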
Posted 5 days ago
9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:
We are seeking an experienced Data Architect to design, implement, and optimize scalable data solutions on Amazon Web Services (AWS) and/or Google Cloud Platform (GCP). The ideal candidate will lead the development of enterprise-grade data architectures that support analytics, machine learning, and business intelligence initiatives while ensuring security, performance, and cost optimization.

Who we are looking for:

Primary Responsibilities:
Architecture & Design:
- Design and implement comprehensive data architectures using AWS or GCP services
- Develop data models, schemas, and integration patterns for structured and unstructured data
- Create solution blueprints, technical documentation, architectural diagrams, and best-practice guidelines
- Implement data governance frameworks and ensure compliance with security standards
- Design disaster recovery and business continuity strategies for data systems

Technical Leadership:
- Lead cross-functional teams in implementing data solutions and migrations
- Provide technical guidance on cloud data services selection and optimization
- Collaborate with stakeholders to translate business requirements into technical solutions
- Drive adoption of cloud-native data technologies and modern data practices

Platform Implementation:
- Implement data pipelines using cloud-native services (AWS Glue, Google Dataflow, etc.)
- Configure and optimize data lakes and data warehouses (S3/Redshift, GCS/BigQuery)
- Set up real-time streaming data processing solutions (Kafka, Airflow, Pub/Sub); a Kinesis sketch follows after this posting
- Implement automated data quality monitoring and validation processes
- Establish CI/CD pipelines for data infrastructure deployment

Performance & Optimization:
- Monitor and optimize data pipeline performance and cost efficiency
- Implement data partitioning, indexing, and compression strategies
- Conduct capacity planning and make scaling recommendations
- Troubleshoot complex data processing issues and performance bottlenecks
- Establish monitoring, alerting, and logging for data systems

Skills:
- Bachelor’s degree in computer science, data engineering, or a related field
- 9+ years of experience in data architecture and engineering
- 5+ years of hands-on experience with AWS or GCP data services
- Experience with large-scale data processing and analytics platforms
- AWS: Redshift, S3, Glue, EMR, Kinesis, Lambda, Data Pipeline, Step Functions, CloudFormation
- GCP: BigQuery, Cloud Storage, Dataflow, Dataproc, Pub/Sub, Cloud Functions, Cloud Composer, Deployment Manager
- IAM, VPC, and security configurations
- SQL and NoSQL databases
- Big data technologies (Spark, Hadoop, Kafka)
- Programming languages (Python, Java, SQL)
- Data modeling and ETL/ELT processes
- Infrastructure as Code (Terraform, CloudFormation)
- Container technologies (Docker, Kubernetes)
- Data warehousing concepts and dimensional modeling
- Experience with modern data architecture patterns
- Real-time and batch data processing architectures
- Data governance, lineage, and quality frameworks
- Business intelligence and visualization tools
- Machine learning pipeline integration
- Strong communication and presentation abilities
- Leadership and team collaboration skills
- Problem-solving and analytical thinking
- Customer-focused mindset with business acumen

Preferred Qualifications:
- Master’s degree in a relevant field
- Cloud certifications (AWS Solutions Architect, GCP Professional Data Engineer)
- Experience with multiple cloud platforms
- Knowledge of data privacy regulations (GDPR, CCPA)

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid

Additional Information:
McDonald’s is committed to providing qualified individuals with disabilities with reasonable accommodations to perform the essential functions of their jobs. McDonald’s provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to sex, sex stereotyping, pregnancy (including pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), race, color, religion, ancestry or national origin, age, disability status, medical condition, marital status, sexual orientation, gender, gender identity, gender expression, transgender status, protected military or veteran status, citizenship status, genetic information, or any other characteristic protected by federal, state or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. McDonald’s Capability Center India Private Limited (“McDonald’s in India”) is a proud equal opportunity employer and is committed to hiring a diverse workforce and sustaining an inclusive culture.
At McDonald’s in India, employment decisions are based on merit, job requirements, and business needs, and all qualified candidates are considered for employment. McDonald’s in India does not discriminate based on race, religion, colour, age, gender, marital status, nationality, ethnic origin, sexual orientation, political affiliation, veteran status, disability status, medical history, parental status, genetic information, or any other basis protected under state or local laws. Nothing in this job posting or description should be construed as an offer or guarantee of employment.
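As a hedged sketch of the real-time streaming ingestion listed under Platform Implementation above: the stream name, region, and payload shape are assumptions, not from the posting.

```python
# Sketch of publishing events to a Kinesis data stream; the stream name and
# record shape are hypothetical, not part of the posting.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_order_event(order: dict) -> None:
    """Send one order event; partitioning by store keeps per-store ordering."""
    kinesis.put_record(
        StreamName="example-order-events",
        Data=json.dumps(order).encode("utf-8"),
        PartitionKey=str(order["store_id"]),
    )

publish_order_event({"store_id": 42, "order_id": "A-1001", "total": 9.98})
```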
Posted 5 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s:
One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Position Summary:
Senior Manager, Integrated Test Lead – Data Product Engineering & Delivery (Sr. Manager, Technology Testing). Lead comprehensive testing strategy and execution for complex data engineering pipelines and product delivery initiatives. Drive quality assurance across integrated systems, data workflows, and customer-facing applications while coordinating cross-functional testing efforts.

Who we are looking for:

Primary Responsibilities:
Test Strategy & Leadership:
- Design and implement end-to-end testing frameworks for data pipelines, ETL/ELT processes, and analytics platforms
- Ensure test coverage across ETL/ELT, data transformation, lineage, and consumption layers
- Develop integrated testing strategies spanning multiple systems, APIs, and data sources
- Establish testing standards, methodologies, and best practices across the organization

Data Engineering Testing:
- Create comprehensive test suites for data ingestion, transformation, and output validation
- Design data quality checks, schema validation, and performance testing for large-scale datasets
- Implement automated testing for streaming and batch data processing workflows (a small example follows below)
- Validate data integrity across multiple environments and systems and against business rules

Cross-Functional Coordination:
- Collaborate with data engineers, software developers, product managers, and DevOps teams
- Coordinate testing activities across multiple product streams and release cycles
- Manage testing dependencies and critical-path items in complex delivery timelines

Quality Assurance & Process Improvement:
- Establish metrics and KPIs for testing effectiveness and product quality to drive continuous improvement in testing processes and tooling
- Lead root cause analysis for production issues and testing gaps

Technical Leadership:
- Mentor junior QA engineers and promote testing best practices
- Evaluate and implement new testing tools and technologies
- Design scalable testing infrastructure and CI/CD integration

Skills:
- 10+ years in software testing, with 3+ years in leadership roles
- 8+ years of experience testing data engineering systems, ETL pipelines, or analytics platforms
- Proven track record with complex, multi-system integration testing
- Experience in agile/scrum environments with rapid delivery cycles
- Strong SQL experience with major databases (Redshift, BigQuery, etc.)
- Experience with cloud platforms (AWS, GCP) and their data services
- Knowledge of data pipeline tools (Apache Airflow, Kafka, Confluent, Spark, dbt, etc.)
- Proficiency in data warehousing, data architecture, reporting, and analytics applications
- Scripting languages (Python, Java, bash) for test automation
- API testing tools and methodologies
- CI/CD/CT tools and practices
- Strong project management and organizational skills
- Excellent verbal and written communication abilities
- Experience managing multiple priorities and competing deadlines

Work location: Hyderabad, India
Work pattern: Full-time role
Work mode: Hybrid
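A minimal sketch of the kind of automated data-quality check this role would own, assuming invented table contents and a zero-drift tolerance; real suites would run such checks inside a test framework after each transformation.

```python
# Minimal data-quality checks: null keys and source/target row parity.
# The orders frame and tolerance are invented for illustration.
import pandas as pd

def check_no_null_keys(df: pd.DataFrame, key: str) -> None:
    nulls = int(df[key].isna().sum())
    assert nulls == 0, f"{nulls} null values in key column '{key}'"

def check_row_parity(source_rows: int, target_rows: int,
                     tolerance: float = 0.0) -> None:
    drift = abs(source_rows - target_rows) / max(source_rows, 1)
    assert drift <= tolerance, f"row-count drift {drift:.2%} exceeds tolerance"

orders = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 5.5, 8.25]})
check_no_null_keys(orders, "order_id")
check_row_parity(source_rows=3, target_rows=len(orders))
print("all checks passed")
```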
Posted 5 days ago
3.0 - 6.0 years
40 - 50 Lacs
Bengaluru
Remote
Key Skills: Java, JSP, SQL, JavaScript, Spring, Redis, Redshift, Webpack, Linux, MySQL, Tomcat, Kotlin, CI/CD, Git, Agile, TDD, Communication Skills

Roles and Responsibilities:
- Work independently and as a productive team member to ensure optimization of high-volume transactional enterprise websites
- Demonstrate a deep understanding of software layers and enhance processes within the technology platform: Linux/Tomcat/MySQL/Java/Spring/JavaScript/object-relational mapping/direct SQL
- Develop internal software to automate and integrate business processes
- Work on multiple projects throughout the business and seamlessly transition from one project to the next

Experience Requirement:
- 3-6 years of professional work experience
- Experience working independently on projects with minimal supervision
- Ability to manage multiple projects with competing priorities
- Proven ability to work in a collaborative group environment
- Hands-on experience developing and optimizing scalable systems that handle high traffic (up to 30 million requests per day)
- Experience building internal tools to support business workflows
- Familiarity with Agile development methodology and iterative release cycles
- Experience in performance-tuning SQL queries and optimizing back-end processing
- Exposure to cloud environments and distributed computing is a plus
- Experience debugging and troubleshooting complex application issues in production environments

Education: Any Graduation.
Posted 5 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
This role is for one of Weekday's clients.
Min Experience: 7 years
Location: Gurugram
Job Type: Full-time

Requirements
We are looking for an experienced Data Engineer with deep expertise in Azure and/or AWS Databricks to join our growing data engineering team. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing data pipelines, enabling seamless data integration and real-time analytics. This role is ideal for professionals who have hands-on experience with cloud-based data platforms and big data processing frameworks, plus a strong understanding of data modeling, pipeline orchestration, and performance tuning. You will work closely with data scientists, analysts, and business stakeholders to deliver scalable and reliable data infrastructure that supports high-impact decision-making across the organization.

Key Responsibilities:
Design and Development of Data Pipelines:
- Design, develop, and maintain scalable and efficient data pipelines using Databricks on Azure or AWS (see the Delta Lake sketch below)
- Integrate data from multiple sources, including structured, semi-structured, and unstructured datasets
- Implement ETL/ELT pipelines for both batch and real-time processing

Cloud Data Platform Expertise:
- Use Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, or similar services to build robust and secure data workflows
- Optimize storage, compute, and processing costs using appropriate services within the cloud environment

Data Modeling & Governance:
- Build and maintain enterprise-grade data models, schemas, and lakehouse architecture
- Ensure adherence to data governance, security, and privacy standards, including data lineage and cataloging

Performance Tuning & Monitoring:
- Optimize data pipelines and query performance through partitioning, caching, indexing, and memory management
- Implement monitoring tools and alerts to proactively address pipeline failures or performance degradation

Collaboration & Documentation:
- Work closely with data analysts, BI developers, and data scientists to understand data requirements
- Document all processes, pipelines, and data flows for transparency, maintainability, and knowledge sharing

Automation and CI/CD:
- Develop and maintain CI/CD pipelines for automated deployment of data pipelines and infrastructure using tools like GitHub Actions, Azure DevOps, or Jenkins
- Implement data quality checks and unit tests as part of the data development lifecycle

Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in data engineering roles, with hands-on experience in Azure or AWS ecosystems
- Strong expertise in Databricks (including notebooks, Delta Lake, and MLflow integration)
- Proficiency in Python and SQL; experience with PySpark or Spark strongly preferred
- Experience with data lake architectures, data warehouse platforms (like Snowflake, Redshift, Synapse), and lakehouse principles
- Familiarity with infrastructure as code (Terraform, ARM templates) is a plus
- Strong analytical and problem-solving skills with attention to detail
- Excellent verbal and written communication skills
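For the Databricks/Delta Lake expertise above, here is a minimal sketch of a batch upsert into a Delta table. The paths and key column are assumptions, and a Databricks (or delta-spark) environment is assumed to provide the session.

```python
# Hedged sketch of a Delta Lake batch upsert (merge); paths, the key column,
# and the mount points are invented. Requires Databricks or delta-spark.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided on Databricks

updates = spark.read.parquet("/mnt/raw/customers/")        # incoming batch
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # refresh changed rows
 .whenNotMatchedInsertAll()   # add new rows
 .execute())
```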
Posted 5 days ago
0 years
6 - 10 Lacs
Gurgaon
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Lead all phases of data engineering, including requirements analysis, data modeling, pipeline design, development, and testing
- Design and implement performance and operational enhancements for scalable data systems
- Develop reusable data components, frameworks, and patterns to accelerate team productivity and innovation
- Conduct code reviews and provide feedback aligned with data engineering best practices and performance optimization
- Ensure data solutions meet standards for quality, scalability, security, and maintainability through rigorous design and code reviews
- Actively participate in Agile/Scrum ceremonies to deliver high-quality data solutions
- Collaborate with software engineers, data analysts, and business stakeholders across Agile teams
- Troubleshoot and resolve production issues post-deployment, designing robust solutions as needed
- Design, develop, test, and document data pipelines and ETL processes, enhancing existing components to meet evolving business needs
- Partner with architecture teams to drive forward-thinking data platform solutions
- Contribute to the design and architecture of secure, scalable, and maintainable data systems, clearly communicating design decisions to technical leadership
- Mentor junior engineers and collaborate on solution design with team members and product owners
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment).
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor’s degree or equivalent experience
- Hands-on experience with cloud data services (AWS, Azure, or GCP)
- Experience building and maintaining ETL/ELT pipelines in enterprise environments
- Experience integrating with RESTful APIs (a small sketch follows below)
- Experience with Agile methodologies (Scrum, Kanban)
- Knowledge of data governance, security, privacy, and vulnerability management
- Understanding of authorization protocols (OAuth) and API integration
- Solid proficiency in SQL, NoSQL, and data modeling
- Proficiency with open-source tools such as Apache Flink, Iceberg, Spark, and PySpark
- Advanced Python skills for data engineering and data science (beyond Jupyter notebooks)
- Familiarity with big data technologies such as Spark, Hadoop, and Databricks
- Ability to build modular, testable, and reusable data solutions
- Solid grasp of data engineering concepts, including data catalogs, data warehouses, data lakes (especially Iceberg), and data dictionaries

Preferred Qualifications:
- Experience with GitHub, Terraform, and GitHub Actions
- Experience with real-time data streaming (Kafka, Kinesis)
- Experience with feature engineering and machine learning pipelines (MLOps)
- Knowledge of data warehousing platforms (Snowflake, Redshift, BigQuery)
- Familiarity with AWS-native data engineering tools: Lambda, Lake Formation, Kinesis (Firehose, Data Streams), Glue (Data Catalog, ETL, Streaming), SageMaker, Athena, and Redshift (including Spectrum)
- Demonstrated ability to mentor and guide junior engineers

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
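A small sketch of the RESTful-API integration requirement above, assuming a hypothetical OAuth-protected endpoint with cursor pagination; the URL, token handling, and response fields are all invented.

```python
# Hedged sketch: paged extraction from a REST API with an OAuth bearer token.
# The endpoint, pagination field, and payload shape are hypothetical.
import requests

def fetch_all(base_url: str, token: str):
    """Yield every record, following the API's 'next' cursor if present."""
    headers = {"Authorization": f"Bearer {token}"}
    url = f"{base_url}/v1/claims"
    while url:
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        payload = resp.json()
        yield from payload["items"]
        url = payload.get("next")  # None ends the loop

# Usage (token acquisition via your OAuth flow is omitted):
# for record in fetch_all("https://api.example.com", token="..."):
#     process(record)
```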
Posted 5 days ago
7.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Job Description for Consultant - Data Engineer

Key Responsibilities and Core Competencies:
- Manage and deliver multiple Pharma projects
- Lead a team of at least 8 members, resolving their technical and business-related problems and other queries
- Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of deliverables
- Collaborate well with onshore and senior colleagues
- Fair understanding of data capabilities (data management, data quality, master and reference data)
- Exposure to project management methodologies, including Agile and Waterfall
- Experience working on RFPs would be a plus

Required Technical Skills:
- Proficient in Python, PySpark, and SQL
- Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc.
- Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc.
- Good experience in ETL, data modeling, and building ETL pipelines
- Conceptual knowledge of relational database technologies, data lakes, lakehouses, etc.
- Sound knowledge of data operations, quality, and data governance

Preferred Qualifications:
- Bachelor's or Master's degree in Engineering, MCA, or equivalent
- 7-9 years of experience as a Data Engineer, with at least 2 years managing medium- to large-scale programs
- Minimum 5 years of Pharma and Life Sciences domain exposure (IQVIA, Veeva, Symphony, IMS, etc.)
- High motivation, good work ethic, maturity, self-organization, and personal initiative
- Ability to work collaboratively and support the team
- Excellent written and verbal communication skills
- Strong analytical and problem-solving skills

Location: Preferably Hyderabad, India

About Us
Chryselys is a US-based Pharma analytics and business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. Chryselys was founded in the heart of US Silicon Valley in November 2019 with the vision of delivering high-value business consulting, solutions, and services to clients in the healthcare and life sciences space. We are trusted partners for organizations that seek to achieve high-impact transformations and reach their higher-purpose mission. Chryselys India supports our global clients to achieve high-impact transformations and reach their higher-purpose mission. For more details, please visit https://linkedin.com/company/chryselys/mycompany and https://chryselys.com
Posted 5 days ago
5.0 - 7.0 years
0 Lacs
Hyderābād
On-site
Job Description – Sr. Data Engineer

Roles & Responsibilities:
- We are looking for a Senior Data Engineer who will be primarily responsible for designing, building, and maintaining ETL/ELT pipelines
- Integrate data from multiple sources and vendors to provide holistic insights from the data
- Build and manage data warehouse solutions: design data models, create ETL processes, implement data quality mechanisms, etc.
- Perform the exploratory data analysis (EDA) required to troubleshoot data-related issues and assist in their resolution (a quick EDA sketch follows below)
- Experience in client interaction is expected
- Experience mentoring juniors and providing required guidance

Required Technical Skills:
- Extensive experience in Python, PySpark, and SQL
- Strong experience in data warehousing, ETL, data modeling, building ETL pipelines, and the Snowflake database
- Strong hands-on experience in Azure and its services
- Proficiency in Databricks, Redshift, ADF, etc.
- Hands-on experience with cloud services such as Azure and AWS (S3, Glue, Lambda, CloudWatch, Athena)
- Sound knowledge of end-to-end data management, data ops, quality, and data governance
- Knowledge of SFDC and Waterfall/Agile methodology
- Strong knowledge of the Pharma domain / life sciences commercial data operations

Qualifications:
- Bachelor's or Master's degree in Engineering, MCA, or equivalent
- 5-7 years of relevant industry experience as a Data Engineer
- Experience working on Pharma syndicated data such as IQVIA, Veeva, and Symphony (claims, CRM, sales, etc.)
- High motivation, good work ethic, maturity, self-organization, and personal initiative
- Ability to work collaboratively and support the team
- Excellent written and verbal communication skills
- Strong analytical and problem-solving skills
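A quick sketch of the EDA step mentioned above, using a tiny invented claims sample; in practice the frame would come from a warehouse extract.

```python
# EDA sketch for troubleshooting data issues: shape, null rates, value
# distribution, and duplicate keys. The claims sample is invented.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": [101, 102, 102, 104],
    "claim_amount": [250.0, None, 80.5, 1200.0],
    "payer": ["A", "B", "B", None],
})

print(claims.shape)
print(claims.isna().mean().sort_values(ascending=False))  # null rate per column
print(claims["claim_amount"].describe())                  # distribution summary
print("duplicate claim_ids:", claims.duplicated(subset=["claim_id"]).sum())
```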
Posted 5 days ago
1.5 - 2.0 years
0 Lacs
India
On-site
Qualification:
- Education: Bachelor’s degree in any field
- Experience: Minimum 1.5-2 years of experience in data engineering support or a related role, with hands-on exposure to AWS

Technical Skills:
- Strong understanding of AWS services, including but not limited to S3, EC2, CloudWatch, and IAM (a CloudWatch example follows below)
- Proficiency in SQL, with the ability to write, optimize, and debug queries for data analysis and issue resolution
- Hands-on experience with Python for scripting and automation; familiarity with shell scripting is a plus
- Good understanding of ETL processes and data pipelines
- Exposure to data warehousing concepts; experience with Amazon Redshift or similar platforms preferred
- Working knowledge of orchestration tools, especially Apache Airflow, including monitoring and basic troubleshooting

Soft Skills:
- Strong communication and interpersonal skills for effective collaboration with cross-functional and multicultural teams
- Problem-solving attitude with an eagerness to learn and adapt quickly
- Willingness to work in a 24x7 support environment on a 6-day working schedule, with rotational shifts as required

Language Requirements:
- Must be able to read and write English proficiently
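As a concrete flavor of the CloudWatch skill listed above, here is a sketch that publishes a custom pipeline metric; the namespace, metric name, and value are assumptions, not from the posting.

```python
# Sketch: publish a custom CloudWatch metric for pipeline monitoring.
# Namespace and metric name are invented placeholders.
from datetime import datetime, timezone

import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")

cw.put_metric_data(
    Namespace="ExamplePipelines",
    MetricData=[{
        "MetricName": "RowsLoaded",
        "Value": 12850,
        "Unit": "Count",
        "Timestamp": datetime.now(timezone.utc),
    }],
)
# An alarm on this metric (e.g., RowsLoaded == 0) would flag a stalled load.
```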
Posted 5 days ago
2.0 - 4.0 years
4 - 8 Lacs
Chennai
On-site
Job Description:
As a Sr. Associate, you will work closely with internal and external stakeholders and deliver high-quality analytics solutions to real-world Pharma commercial organizations' business problems. You will bring deep Pharma/Healthcare domain expertise and use cloud data tools to help solve complex problems.

Key Responsibilities:
- Collaborate with internal teams and client stakeholders to deliver Business Intelligence solutions that support key decision-making for the Commercial function of Pharma organizations
- Leverage deep domain knowledge of pharmaceutical sales, claims, and secondary data to structure and optimize BI reporting frameworks
- Develop, maintain, and optimize interactive dashboards and visualizations using BI tools like Power BI and Qlik to enable data-driven insights
- Translate business requirements into effective data visualizations and actionable reporting solutions tailored to end-user needs
- Write complex SQL queries and work with large datasets housed in data lakes or data warehouses to extract, transform, and present data efficiently
- Conduct data validation and QA checks, and troubleshoot stakeholder-reported issues by performing root cause analysis and implementing solutions
- Collaborate with data engineering teams to define data models and KPIs and automate the data pipelines feeding BI tools
- Manage ad-hoc and recurring reporting needs, ensuring accuracy, timeliness, and consistency of data outputs
- Drive process improvements in dashboard development, data governance, and reporting workflows
- Document dashboard specifications and data definitions, and maintain data dictionaries
- Stay up to date with industry trends in BI tools, visualization best practices, and emerging data sources in the healthcare and pharma space
- Prioritize and manage multiple BI project requests in a fast-paced, dynamic environment

Qualifications:
- 2-4 years of experience in BI development, reporting, or data visualization, preferably in the pharmaceutical or life sciences domain
- Strong hands-on experience building dashboards using Power BI and Qlik
- Advanced SQL skills for querying and transforming data across complex data models
- Familiarity with pharma data such as sales, claims, and secondary market data is a strong plus
- Experience in data profiling, cleansing, and standardization techniques
- Ability to translate business questions into effective visual analytics
- Strong communication skills to interact with stakeholders and present data insights clearly
- Self-driven, detail-oriented, and comfortable working with minimal supervision in a team-oriented environment
- Exposure to data warehousing concepts and cloud data platforms (e.g., Snowflake, Redshift, or BigQuery) is an advantage

Education: Bachelor's or Master's degree (computer science, engineering, or other technical disciplines)
Posted 5 days ago
2.0 years
5 - 8 Lacs
Bengaluru
On-site
Basic Qualifications:
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared); a t-test sketch follows below
- Experience with a scripting language (e.g., Python, Java, or R)

We are looking to hire an insightful, results-oriented Business Intelligence Engineer to produce and drive analyses for the Worldwide Operations Security (WWOS) team at Amazon. To keep our operations network secure and assure operational continuity, we are seeking an experienced professional who wants to join our Business Insights team. This role involves translating broad business problems into specific analytics projects, conducting deep quantitative analyses, and communicating results effectively.

Key job responsibilities:
- Design and implement scalable data infrastructure solutions
- Create and maintain data pipelines for metric tracking and reporting
- Develop analytical models to identify theft/fraud trends and patterns
- Partner with stakeholders to translate business needs into analytical solutions
- Build and maintain data visualization dashboards for operational insights

A day in the life
As a Business Intelligence Engineer I, you will collaborate with cross-functional teams to design and implement data solutions that drive business decisions. Your day might include analyzing theft and fraud patterns, building automated reporting systems, or presenting insights to stakeholders. You'll work with petabyte-scale datasets and have the opportunity to influence strategic decisions through your analysis.

About the team
We are part of the Business Insights team under the Strategy vertical in Worldwide Operations Security, focusing on data analytics to support security and loss prevention initiatives. Our team collaborates across global operations to develop innovative solutions that protect Amazon's assets and contribute to business profitability. We leverage technology to identify patterns, prevent losses, and strengthen our operational network.

Preferred Qualifications:
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
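A small illustration of the statistical methods the listing names: Welch's t-test comparing two invented groups of loss rates. The numbers are made up for demonstration.

```python
# Sketch of the t-test skill named above: compare loss rates between two
# site groups. All values are invented sample data.
from scipy import stats

site_a = [0.12, 0.10, 0.15, 0.11, 0.13]
site_b = [0.18, 0.17, 0.16, 0.20, 0.19]

# Welch's t-test (equal_var=False) does not assume equal group variances.
t_stat, p_value = stats.ttest_ind(site_a, site_b, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")  # small p suggests a real difference
```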
Posted 5 days ago
80.0 years
0 Lacs
Bengaluru
On-site
Job Description
For more than 80 years, Kaplan has been a trailblazer in education and professional advancement. We are a global company at the intersection of education and technology, focused on collaboration, innovation, and creativity to deliver a best-in-class educational experience and make Kaplan a great place to work. Our offices in India opened in Bengaluru in 2018. Since then, our team has fueled growth and innovation across the organization, impacting students worldwide. We are eager to grow and expand with skilled professionals like you who use their talent to build solutions, enable effective learning, and improve students’ lives. The future of education is here, and we are eager to work alongside those who want to make a positive impact and inspire change in the world around them.

The Associate Data Engineer at Kaplan North America (KNA) within the Analytics division will work with world-class psychometricians, data scientists, and business analysts to forever change the face of education. This role is a hands-on technical expert who will help implement an Enterprise Data Warehouse powered by AWS RA3 as a key feature of our Lake House architecture. The perfect candidate possesses strong technical knowledge in data engineering, data observability, infrastructure automation, data-ops methodology, systems architecture, and development. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in a fast-paced environment, understanding the business requirements and implementing data and reporting solutions. Above all, you should be passionate about working with big data and be someone who loves to bring datasets together to answer business questions and drive change.

Responsibilities:
- Design, implement, and deploy data solutions; solve difficult problems, generating positive feedback
- Build different types of data warehousing layers based on specific use cases
- Lead the design, implementation, and successful delivery of large-scale, critical, or difficult data solutions involving a significant amount of work
- Build scalable data infrastructure and understand distributed systems concepts from a data storage and compute perspective
- Apply expertise in SQL and a strong understanding of ETL and data modeling
- Ensure the accuracy and availability of data to customers, and understand how technical decisions can impact their business's analytics and reporting
- Be proficient in at least one scripting/programming language to handle large-volume data processing
- 30-day notice period preferred

Requirements:
- In-depth knowledge of the AWS stack (RA3, Redshift, Lambda, Glue, SNS); a Redshift COPY sketch follows below
- Experience in data modeling, ETL development, and data warehousing
- Effective troubleshooting and problem-solving skills
- Strong customer focus, ownership, urgency, and drive
- Excellent verbal and written communication skills and the ability to work well in a team

Preferred Qualification:
- Proficiency with Airflow, Tableau, and SSRS

Location: Bangalore, KA, India
Employee Type: Employee
Job Functional Area: Systems Administration/Engineering
Business Unit: 00091 Kaplan Higher ED

At Kaplan, we recognize the importance of attracting and retaining top talent to drive our success in a competitive market.
Our salary structure and compensation philosophy reflect the value we place on the experience, education, and skills that our employees bring to the organization, taking into consideration labor market trends and total rewards. All positions with Kaplan are paid at least $15 per hour or $31,200 per year for full-time positions. Additionally, certain positions are bonus or commission-eligible. And we have a comprehensive benefits package, learn more about our benefits here . Diversity & Inclusion Statement: Kaplan is committed to cultivating an inclusive workplace that values diversity, promotes equity, and integrates inclusivity into all aspects of our operations. We are an equal opportunity employer and all qualified applicants will receive consideration for employment regardless of age, race, creed, color, national origin, ancestry, marital status, sexual orientation, gender identity or expression, disability, veteran status, nationality, or sex. We believe that diversity strengthens our organization, fuels innovation, and improves our ability to serve our students, customers, and communities. Learn more about our culture here . Kaplan considers qualified applicants for employment even if applicants have an arrest or conviction in their background check records. Kaplan complies with related background check regulations, including but not limited to, the Los Angeles County Fair Chance Ordinance for Employers and the California Fair Chance Act. There are various positions where certain convictions may disqualify applicants, such as those positions requiring interaction with minors, financial records, or other sensitive and/or confidential information. Kaplan is a drug-free workplace and complies with applicable laws.
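Since the posting centers on loading a Redshift (RA3) warehouse, here is a hedged sketch of an S3-to-Redshift COPY issued from Python. The cluster endpoint, table, bucket, and IAM role are all invented placeholders.

```python
# Sketch: bulk-load S3 Parquet into Redshift with COPY via psycopg2.
# Every connection detail and resource name below is a made-up example.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="loader", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY analytics.fact_enrollments
        FROM 's3://example-bucket/exports/enrollments/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/ExampleRedshiftCopy'
        FORMAT AS PARQUET;
    """)
# COPY is the idiomatic bulk path into Redshift; row-by-row INSERTs are far slower.
```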
Posted 5 days ago
0 years
6 - 9 Lacs
Bengaluru
On-site
Ready to build the future with AI?
At Genpact, we don’t just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what’s possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Manager, Data Engineer!
In this role, we are seeking a Senior Data Engineer with deep expertise in AWS Redshift and SQL, strong project leadership skills, and a passion for driving data initiatives end to end. This role is ideal for a self-motivated professional who thrives in a fast-paced environment and is looking to play a key role in designing, building, and optimizing data pipelines for analytics and decision-making.

Responsibilities:
- Lead and manage end-to-end data engineering projects, collaborating with cross-functional teams including analytics, product, and engineering
- Design and maintain scalable ETL/ELT pipelines using Redshift, SQL, and AWS services (e.g., S3, Glue, Lambda)
- Optimize Redshift clusters and SQL queries for performance and cost-efficiency
- Serve as the domain expert for data modeling, architecture, and warehousing best practices
- Proactively identify and resolve bottlenecks and data quality issues
- Mentor junior engineers and enforce coding and architectural standards
- Own the data lifecycle, from ingestion and transformation to validation and delivery for reporting

Qualifications we seek in you!
Minimum Qualifications:
- Solid experience in data engineering or a related field
- Proven expertise in AWS Redshift, advanced SQL, and modern data pipeline tools
- Hands-on experience with data lakes, data warehousing, and distributed systems
- Strong understanding of data governance, security, and performance tuning
- Demonstrated ability to lead projects independently and drive them to completion
- Excellent problem-solving, communication, and stakeholder management skills

Preferred Qualifications/Skills:
- Experience with Python, PySpark, and orchestration tools (e.g., Airflow)
- Familiarity with BI tools (e.g., Tableau, QuickSight)
- AWS certifications or relevant credentials are a plus

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let’s build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Manager
Primary Location: India-Bangalore
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Jul 28, 2025, 2:42:30 AM
Unposting Date: Ongoing
Master Skills List: Operations
Job Category: Full Time
Posted 5 days ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Workmode: Hybrid
Work Location: PAN India
Work Timing: 2 PM to 11 PM
Primary Skill: Data Engineer
Experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
4+ years of experience working with both relational and non-relational/NoSQL databases is required.
Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Experience in Redshift is also required, along with other SQL database experience.
Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture, and a complete understanding of building an end-to-end data pipeline.
Secondary Skills
Strong understanding of Kinesis, Kafka, and CDK. Experience with Kafka and ECS is also required.
A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling is required.
Experience in Node.js and CDK.
Responsibilities
Lead the architectural design and development of a scalable, reliable, and flexible metadata-driven data ingestion and extraction framework on AWS using Python/PySpark (a minimal sketch follows this listing).
Design and implement a customizable data processing framework using Python/PySpark. This framework should be capable of handling diverse scenarios and evolving data processing requirements.
Implement data pipelines for data ingestion, transformation, and extraction leveraging AWS Cloud services.
Seamlessly integrate a variety of AWS services, including S3, Glue, Kafka, Lambda, SQL, SNS, Athena, EC2, RDS (Oracle, Postgres, MySQL), and AWS Crawler, to construct a highly scalable and reliable data ingestion and extraction pipeline.
Facilitate configuration and extensibility of the framework to adapt to evolving data needs and processing scenarios.
Develop and maintain rigorous data quality checks and validation processes to safeguard the integrity of ingested data.
Implement robust error handling, logging, monitoring, and alerting mechanisms to ensure the reliability of the entire data pipeline.
Qualifications
Must Have
Over 6 years of hands-on experience in data engineering, with a proven focus on data ingestion and extraction using Python/PySpark.
Extensive AWS experience is mandatory, with proficiency in Glue, Lambda, SQS, SNS, AWS IAM, AWS Step Functions, S3, and RDS (Oracle, Aurora Postgres).
4+ years of experience working with both relational and non-relational/NoSQL databases is required.
Strong SQL experience is necessary, demonstrating the ability to write complex queries from scratch. Strong working experience in Redshift is required, along with other SQL database experience.
Strong scripting experience with the ability to build intricate data pipelines using AWS serverless architecture.
Complete understanding of building an end-to-end data pipeline.
Nice to Have
Strong understanding of Kinesis, Kafka, and CDK.
A strong understanding of data concepts related to data warehousing, business intelligence (BI), data security, data quality, and data profiling.
Experience in Node.js and CDK.
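As a rough illustration of the "metadata-driven ingestion framework" this listing describes, here is a minimal PySpark sketch in which each source is described by a config record and one generic job handles them all. The config entries, S3 paths, and column names are all hypothetical; a real framework would likely keep this metadata in DynamoDB or the Glue Data Catalog rather than inline.

```python
# Hypothetical sketch of a metadata-driven PySpark ingestion step: each
# source is described by a metadata record; one generic job processes all.
from pyspark.sql import SparkSession, functions as F

# Invented metadata entries; in practice these would be stored externally.
SOURCES = [
    {"name": "orders",    "path": "s3://example-raw/orders/",    "format": "json",
     "target": "s3://example-curated/orders/",    "partition_by": "ingest_date"},
    {"name": "customers", "path": "s3://example-raw/customers/", "format": "csv",
     "target": "s3://example-curated/customers/", "partition_by": "ingest_date"},
]

def ingest(spark, source):
    """Read one source per its metadata, stamp lineage columns, write Parquet."""
    reader = spark.read.format(source["format"]).option("header", "true")
    df = (reader.load(source["path"])
          .withColumn("ingest_date", F.current_date())
          .withColumn("source_name", F.lit(source["name"])))
    # Basic data-quality gate: refuse to publish an empty extract.
    if df.head(1) == []:
        raise ValueError(f"No rows read for source {source['name']}")
    (df.write.mode("append")
       .partitionBy(source["partition_by"])
       .parquet(source["target"]))

if __name__ == "__main__":
    spark = SparkSession.builder.appName("metadata-driven-ingestion").getOrCreate()
    for source in SOURCES:
        ingest(spark, source)
```

The point of the pattern is that adding a new feed becomes a metadata change rather than a code change, which is what makes such a framework "configurable and extensible" in the listing's terms.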
Posted 5 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Line of Service: Advisory
Industry/Sector: FS X-Sector
Specialism: Data, Analytics & AI
Management Level: Senior Associate
Job Description & Summary
At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.
Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.
At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Responsibilities:
Design, build, and maintain scalable data pipelines for a variety of cloud platforms, including AWS, Azure, and Databricks.
Implement data ingestion and transformation processes to facilitate efficient data warehousing.
Utilize cloud services to enhance data processing capabilities:
AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
Optimize Spark job performance to ensure high efficiency and reliability (see the tuning sketch after this listing).
Stay proactive in learning and implementing new technologies to improve data processing frameworks.
Collaborate with cross-functional teams to deliver robust data solutions.
Work on Spark Streaming for real-time data processing as necessary.
Qualifications:
3-8 years of experience in data engineering with a strong focus on cloud environments.
Proficiency in PySpark or Spark is mandatory.
Proven experience with data ingestion, transformation, and data warehousing.
In-depth knowledge and hands-on experience with cloud services (AWS/Azure).
Demonstrated ability in performance optimization of Spark jobs.
Strong problem-solving skills and the ability to work independently as well as in a team.
Cloud certification (AWS, Azure) is a plus.
Familiarity with Spark Streaming is a bonus.
Mandatory skill sets: PL/SQL Developer
Preferred skill sets: PL/SQL Developer
Years of experience required: 7+
Education qualification: BE/BTech/MBA/MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Business Analyzer
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Creativity, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Embracing Change, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis {+ 16 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
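The listing above asks for demonstrated Spark performance optimization. As a hedged illustration of what that typically involves, here is a small PySpark sketch showing three routine tuning moves: broadcasting a small dimension table, caching a reused frame, and repartitioning before a partitioned write. The paths, table names, and partition count are invented for illustration.

```python
# Hypothetical sketch of routine Spark-job tuning. All identifiers and
# sizes are invented; the techniques are the point, not the names.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = (SparkSession.builder
         .appName("spark-tuning-sketch")
         .config("spark.sql.shuffle.partitions", "200")  # size to the cluster
         .getOrCreate())

facts = spark.read.parquet("s3://example-lake/fact_sales/")   # large table
dims = spark.read.parquet("s3://example-lake/dim_product/")   # small table

# A broadcast join avoids shuffling the large fact table when the
# dimension table is small enough to ship to every executor.
enriched = facts.join(broadcast(dims), on="product_id", how="left")

# Cache only when the frame is reused by several downstream actions.
enriched.cache()
positive_lines = enriched.filter(col("amount") > 0)

# Repartition by the write key so output files align with the partitioning
# scheme instead of producing many small files per partition.
(positive_lines.repartition("sale_date")
    .write.mode("overwrite")
    .partitionBy("sale_date")
    .parquet("s3://example-lake/curated_sales/"))
```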
Posted 5 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Description
Come build the future of smart security with us. Are you interested in helping shape the future of devices and services designed to keep people close to what’s important?
About Ring
Since its founding in 2013, Ring has been on a mission to make neighborhoods safer. From the first-ever video doorbell to the award-winning DIY Ring Alarm system, Ring’s smart home security product line, as well as the Neighbors app, offers users affordable whole-home and neighborhood security. At Ring, we are committed to making home and neighborhood security accessible and effective for everyone, while working hard to bring communities together. Ring is an Amazon company. For more information, visit www.ring.com. With Ring, you’re always home.
Ring is looking for an insightful and analytical Business Intelligence Engineer with strong business and technical skills to join our Business Intelligence team. In this role, you will partner with product management, engineering, quality assurance, and other BI teams that power Ring. Your work will be instrumental to achieving Ring’s mission, will be highly visible to Ring / Amazon leadership, and will drive key strategic company goals.
The Business Intelligence Engineer on the Ring Decision Sciences Platform BI team will develop models and tools, conduct statistical analyses, evaluate large data sets, and create tailored models and dashboards. Additionally, you will be instrumental in the creation of a reliable and scalable infrastructure for ongoing reporting and analytics. You will be structuring ambiguous problems and designing analytics across various disciplines, resulting in actionable recommendations ranging from strategic planning, product strategy/launches, and engineering improvements to marketing campaign optimization, customer service trending, and competitive research.
Key job responsibilities
Enable decision-making by retrieving and aggregating data from multiple sources to present it in a digestible and actionable format
Work with the iOS and Android development and product teams to identify gaps and trends
Analyze large data sets using a variety of database query and visualization tools
Provide technical expertise in extracting, integrating, and analyzing critical data
Anticipate, identify, structure, and solve critical problems
Design and develop key performance metrics and indicators using standardized and custom reports
Perform ad hoc analysis to quickly solve time-sensitive operational issues and business cases
Clearly communicate any potential data discrepancies and/or reporting downtime, including specific root cause, steps to resolution, and resolution date, to a large end-user base (a minimal sketch of such a check follows this listing)
Partner with subject matter experts to document and translate business requirements into technical requirements
Manage multiple projects and proactively communicate issues, priorities, and objectives
Partner with BI architects to provide valuable inputs to remodel the existing data warehouse
A day in the life
As you lead the Business Intelligence Engineering (BIE) efforts for an upcoming device launch, your day involves collaborating closely with Product, Program, Firmware, and various other engineering teams.
Having already identified the key signals to analyze device performance based on feature sets, you spend time aligning on proper definitions to analyze these features, meeting with engineering teams to instrument appropriate signals, maintaining data pipelines, and refining comprehensive dashboards. Throughout the day, you monitor platform performance while advancing initiatives to improve Ring’s analytics through AI workflow implementation.
About the team
The Ring Decision Sciences Platform is responsible for the data strategy, architecture, governance, science, and software services Ring teams use to inform business strategy or power experiences with data. The central Data Science and Analytics team (within Decision Sciences, and the team where this role is based) is responsible for core business metrics, shared data models, AI/ML models, business intelligence dashboards, and business analysis/science support.
Basic Qualifications
3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with data modeling, warehousing, and building ETL pipelines
Experience with statistical analysis packages such as R, SAS, and Matlab
Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling
Preferred Qualifications
Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI - Karnataka
Job ID: A3025888
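To give a concrete flavor of the "communicate data discrepancies" duty in the listing above, here is a hedged sketch that pulls a daily metric from a warehouse and flags days that deviate sharply from trend. The query, table, connection details, and thresholds are all invented assumptions, not anything from the posting.

```python
# Hypothetical sketch: pull a daily device-event count and flag outlier
# days using a robust z-score. All identifiers and numbers are invented.
import pandas as pd
import psycopg2

QUERY = """
SELECT event_date, COUNT(*) AS events
FROM device_telemetry.daily_events
WHERE event_date >= CURRENT_DATE - 28
GROUP BY event_date
ORDER BY event_date;
"""

conn = psycopg2.connect(host="example-redshift-host", port=5439,
                        dbname="dev", user="bie_user", password="***")
with conn.cursor() as cur:
    cur.execute(QUERY)
    df = pd.DataFrame(cur.fetchall(), columns=["event_date", "events"])
conn.close()

# Median absolute deviation gives a z-score that is robust to the very
# outliers we are trying to detect; 1.4826 rescales MAD to a std-dev scale.
median = df["events"].median()
mad = float((df["events"] - median).abs().median()) or 1.0  # avoid div-by-zero
df["robust_z"] = (df["events"] - median) / (1.4826 * mad)
anomalies = df[df["robust_z"].abs() > 3]

if not anomalies.empty:
    print("Potential data discrepancy on:", anomalies["event_date"].tolist())
```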
Posted 5 days ago
10.0 years
0 Lacs
India
On-site
The Impact
The Director, Data Engineering will lead the development and implementation of a comprehensive data strategy that aligns with the organization’s business goals and enables data-driven decision making.
You will:
Build and manage a team of talented data managers and engineers with the ability to not only keep up with, but also pioneer in, this space
Collaborate with and influence leadership to directly impact company strategy and direction
Develop new techniques and data pipelines that will enable various insights for internal and external customers
Develop deep partnerships with client implementation teams, engineering, and product teams to deliver on major cross-functional measurements and testing
Communicate effectively to all levels of the organization, including executives
Partner successfully with teams of dramatically varying backgrounds, from the highly technical to the highly creative
Design a data engineering roadmap and execute the vision behind it
Hire, lead, and mentor a world-class data team
Partner with other business areas to co-author and co-drive strategies on our shared roadmap
Oversee the movement of large amounts of data into our data lake
Establish a customer-centric approach and synthesize customer needs
Own end-to-end pipelines and destinations for the transfer and storage of all data
Manage third-party resources and critical data integration vendors
Promote a culture that drives autonomy, responsibility, perfection, and mastery
Maintain and optimize software and cloud expenses to meet the financial goals of the company
Provide technical leadership to the team in the design and architecture of data products, and drive change across process, practices, and technology within the organization
Work with engineering managers and functional leads to set direction and ambitious goals for the Engineering department
Ensure data quality, security, and accessibility across the organization
About you:
10+ years of experience in data engineering
5+ years of experience leading data teams of 30+ resources, including selecting talent and planning/allocating resources across multiple geographies and functions
5+ years of experience with GCP tools and technologies, specifically Google BigQuery, Google Cloud Composer, Dataflow, Dataform, etc. (see the sketch after this listing)
Experience creating large-scale data engineering pipelines, data-based decision-making, and quantitative analysis tools and software
Hands-on experience with code version control systems (Git)
Experience with CI/CD, data architectures, pipelines, quality, and code management
Experience with complex, high-volume, multi-dimensional data, based on unstructured, structured, and streaming datasets
Experience with SQL and NoSQL databases
Experience creating, testing, and supporting production software and systems
Proven track record of identifying and resolving performance bottlenecks for production systems
Experience designing and developing data lakes, data warehouses, ETL, and task orchestration systems
Strong leadership, communication, time management, and interpersonal skills
Proven architectural skills in data engineering
Experience leading teams developing production-grade data pipelines on large datasets
Experience designing large data lakes and lakehouses, managing data flows that integrate information from various sources into a common pool, and implementing data pipelines based on the ETL model
Experience with common data languages (e.g., Python, Scala) and data warehouses (e.g., Redshift, BigQuery, Snowflake, Databricks)
Extensive experience with cloud tools and technologies; GCP preferred
Experience managing real-time data pipelines
Successful track record and demonstrated thought leadership and cross-functional influence and partnership within an agile/waterfall development environment
Experience in regulated industries or with compliance frameworks (e.g., SOC 2, ISO 27001)
Write to sanish@careerxperts.com to get connected!
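For a sense of the GCP building blocks this listing names, here is a minimal sketch of one pipeline step on BigQuery: a partition-pruned aggregation landed in a staging table, which an orchestrator such as Cloud Composer would schedule and monitor. The project, dataset, and table names are hypothetical.

```python
# Hypothetical sketch: refresh a staging aggregate in BigQuery. All
# project/dataset/table names are invented for illustration.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

destination = bigquery.TableReference.from_string(
    "example-project.staging.daily_revenue"
)
job_config = bigquery.QueryJobConfig(
    destination=destination,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

sql = """
SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
FROM `example-project.lake.orders`
WHERE DATE(order_ts) >= DATE_SUB(CURRENT_DATE(), INTERVAL 30 DAY)
GROUP BY order_date
"""

# result() blocks until the job finishes; failures raise exceptions,
# which the orchestrator (e.g., Cloud Composer) would catch and alert on.
client.query(sql, job_config=job_config).result()
print("staging.daily_revenue refreshed")
```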
Posted 5 days ago
7.0 - 11.0 years
0 Lacs
India
Remote
JD: AWS Data Engineer
Experience Range: 7 to 11 years
Location: Remote
Shift Timings: 12 PM to 9 PM
Primary Skills: Python, PySpark, SQL, AWS
Responsibilities
Data Architecture: Develop and maintain the overall data architecture, ensuring scalability, performance, and data quality.
AWS Data Services: Expertise in using AWS data services such as AWS Glue, S3, SNS, SES, DynamoDB, Redshift, CloudFormation, CloudWatch, IAM, DMS, EventBridge Scheduler, etc.
Data Warehousing: Design and implement data warehouses on AWS, leveraging AWS Redshift or other suitable options.
Data Lakes: Build and manage data lakes on AWS using AWS S3 and other relevant services.
Data Pipelines: Design and develop efficient data pipelines to extract, transform, and load data from various sources (see the orchestration sketch after this listing).
Data Quality: Implement data quality frameworks and best practices to ensure data accuracy, completeness, and consistency.
Cloud Optimization: Optimize data engineering solutions for performance, cost-efficiency, and scalability on the AWS cloud.
Team Leadership: Mentor and guide data engineers, ensuring they adhere to best practices and meet project deadlines.
Qualifications
Bachelor’s degree in computer science, engineering, or a related field.
6-7 years of experience in data engineering roles, with a focus on AWS cloud platforms.
Strong understanding of data warehousing and data lake concepts.
Proficiency in SQL and at least one programming language (Python/PySpark).
Good to have: experience with big data technologies such as Hadoop, Spark, and Kafka.
Knowledge of data modeling and data quality best practices.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a team.
Preferred Qualifications
Certifications such as AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.
If interested, please submit your CV to Khushboo@Sourcebae.com or share it via WhatsApp at 8827565832.
Stay updated with our latest job opportunities and company news by following us on LinkedIn: https://www.linkedin.com/company/sourcebae
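The listing above wires together several AWS services (S3, Glue, SNS, Lambda). As a hedged illustration, here is a minimal Lambda handler that reacts to an S3 put event by starting a Glue job and notifying an SNS topic on failure. The job name, topic ARN, and argument names are invented assumptions.

```python
# Hypothetical sketch: an S3-triggered Lambda that starts a Glue job and
# alerts via SNS on failure. Job name, topic ARN, and argument keys are
# invented for illustration.
import boto3

glue = boto3.client("glue")
sns = boto3.client("sns")

GLUE_JOB = "example-ingest-job"
ALERT_TOPIC = "arn:aws:sns:ap-south-1:123456789012:example-data-alerts"

def handler(event, context):
    # S3 put events carry the bucket and key under Records[].s3
    record = event["Records"][0]["s3"]
    s3_path = f"s3://{record['bucket']['name']}/{record['object']['key']}"
    try:
        run = glue.start_job_run(
            JobName=GLUE_JOB,
            Arguments={"--input_path": s3_path},  # read by the Glue script
        )
        return {"job_run_id": run["JobRunId"]}
    except Exception as exc:
        sns.publish(TopicArn=ALERT_TOPIC,
                    Subject="Ingestion trigger failed",
                    Message=f"{s3_path}: {exc}")
        raise
```

For more complex dependencies, the same trigger logic is usually lifted into Step Functions (also named in the listing) so retries and branching live in the state machine rather than in handler code.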
Posted 5 days ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Come help Amazon create cutting-edge data- and science-driven technologies for delivering packages to the doorstep of our customers! The Last Mile Routing & Planning organization builds the software, algorithms, and tools that make the “magic” of home delivery happen: our flow, sort, dispatch, and routing intelligence systems are responsible for the billions of daily decisions needed to plan and execute safe, efficient, and frustration-free routes for drivers around the world. Our team supports deliveries (and pickups!) for Amazon Logistics, Prime Now, Amazon Flex, Amazon Fresh, Lockers, and other new initiatives.
As part of the Last Mile Science & Technology organization, you’ll partner closely with Product Managers, Data Scientists, and Software Engineers to drive improvements in Amazon’s Last Mile delivery network. You will leverage data and analytics to generate insights that accelerate the scale, efficiency, and quality of the routes we build for our drivers through our end-to-end last mile planning systems. You will present your analyses, plans, and recommendations to senior leadership and connect new ideas to drive change. Analytical ingenuity and leadership, business acumen, effective communication capabilities, and the ability to work effectively with cross-functional teams in a fast-paced environment are critical skills for this role.
Responsibilities
Create actionable business insights through analytical and statistical rigor to answer business questions, drive business decisions, and develop recommendations to improve operations
Collaborate with Product Managers, software engineering, data science, and data engineering partners to design and develop analytic capabilities
Define and govern key business metrics, build automated dashboards and analytic self-service capabilities, and engineer data-driven processes that drive business value
Navigate ambiguity to develop analytic solutions and shape work for junior team members
Basic Qualifications
2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
Experience with data visualization using Tableau, QuickSight, or similar tools
Experience with one or more industry analytics and visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared); a short illustration follows this listing
Experience with a scripting language (e.g., Python, Java, or R)
Preferred Qualifications
Master’s degree or advanced technical degree
Knowledge of data modeling and data pipeline design
Experience with statistical analysis and correlation analysis
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2902015
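Since the basic qualifications above name the t-test specifically, here is a small sketch of how such a comparison might look in practice: a Welch two-sample t-test on route durations before and after a hypothetical planning change. The numbers are fabricated purely for illustration.

```python
# Hypothetical sketch of a two-sample comparison like the qualifications
# describe. The duration samples below are invented, not real data.
from scipy import stats

baseline_minutes = [212, 198, 205, 220, 201, 215, 208, 199]
pilot_minutes = [190, 202, 188, 195, 197, 185, 192, 200]

# equal_var=False gives Welch's t-test, which does not assume the two
# groups share a common variance.
t_stat, p_value = stats.ttest_ind(baseline_minutes, pilot_minutes,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Route durations differ significantly between the two groups.")
else:
    print("No significant difference detected at the 5% level.")
```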
Posted 5 days ago