Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
2.0 - 3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - AWS Cloud Native – Full Stack Engineer Role Type - Full Time The opportunity We are the only professional services organization who has a separate business dedicated exclusively to the financial services marketplace. Join Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. We’re looking for an AWS Cloud Native – Full Stack Engineer at EY GDS. You will work on designing and implementing cloud-native applications and services using AWS technologies. You will collaborate with development teams to build, deploy, and manage applications that meet business needs and leverage AWS best practices. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of a new service offering. We are the only professional services organization who has a separate business dedicated exclusively to the financial and non-financial services marketplace. Join Digital Engineering team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surround business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design, and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations. Your Key Responsibilities Application Development: Design and develop cloud-native applications and services using Angular/React/Typescript, Java Springboot /Node, AWS services such as Lambda, API Gateway, ECS, EKS, and DynamoDB, Glue, Redshift, EMR. 
Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates. Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS. Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency. Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices. Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations. Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting. Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow. Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability. Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration. Skills And Attributes For Success Required Skills: AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, and RDS, Glue, Redshift, EMR. Programming: Strong programming skills in languages such as Python, Java, or Node.js, Angular/React/Typescript. CI/CD: Experience with CI/CD tools and practices, including AWS CodePipeline, CodeBuild, and CodeDeploy. Infrastructure as Code: Familiarity with IaC tools like AWS CloudFormation or Terraform for automating application infrastructure. Security: Understanding of AWS security best practices, including IAM, KMS, and encryption. Observability Tools: Proficiency in using observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack. Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm. Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or ELK Stack. Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams. Preferred Qualifications: Certifications: AWS Certified Solutions Architect – Associate or Professional, AWS Certified Developer – Associate, or similar certifications. Experience: 2-3 Years previous experience in an application engineering role with a focus on AWS technologies. Agile Methodologies: Familiarity with Agile development practices and methodologies. Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues. 
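For illustration only, the following is a minimal sketch of the serverless pattern this role works with: a Python Lambda handler behind API Gateway that persists a record to DynamoDB. The table name, fields, and route are hypothetical assumptions, not part of the posting.

```python
# Minimal AWS Lambda handler sketch for an API Gateway proxy integration.
# Assumes a DynamoDB table named "orders" with partition key "order_id" already exists
# and that the function's IAM role allows dynamodb:PutItem.
import json
import uuid

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name


def lambda_handler(event, context):
    """Create an order record from the POST body and return its id."""
    body = json.loads(event.get("body") or "{}")
    item = {
        "order_id": str(uuid.uuid4()),
        "customer": body.get("customer", "unknown"),
        "amount": str(body.get("amount", 0)),  # stored as a string to avoid float precision issues
    }
    table.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"order_id": item["order_id"]}),
    }
```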
Education : Degree : Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Ultimate.ai Data Science Pune, Maharashtra, India Posted on May 30, 2025 Apply now Job Description Note***: This is a hybrid role, combining remote and on-site work, requiring 3 days in the office, and relocation to Pune. Our Enterprise Data & Analytics (EDA) is seeking an experienced Senior Data Platform Engineer to join our growing Platform engineering team. You’ll work in a collaborative Agile environment using the latest engineering best practices and be involved in all aspects of the software development lifecycle. As a Senior Data Platform Engineer, you will be responsible for building and maintaining many key parts of the Zendesk Data Platform including next generation reporting and analytics. Working closely with your team members to craft, develop and deliver reporting products for our customers and high quality software projects on time. Data is at the heart of Zendesk’s business! This is an autonomous role that can have a huge impact across all of the Zendesk product family! What You Get To Do Every Single Day Design, develop and maintain scalable and efficient data infrastructure components, including data pipelines, storage solutions and data processing frameworks Build and manage integrations with various internal and external data sources via ETL solutions. Design, implement, and maintain CI/CD pipelines using DevOps tools like Terraform & Github Actions for automated build, test, and deployment processes Build high-quality, clean, scalable and reusable code by enforcing best practices around software engineering architecture and processes (Code Reviews, Unit testing, etc.) Collaborate with team members on researching and brainstorming different solutions for technical challenges we face Continually improve data pipelines for high efficiency, throughput and quality of data Investigate production issues and fine-tune our data pipelines Build and Promote best engineering practices in areas of version control system, CI/CD, code review, pair programming Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery Stay up-to-date with the latest technologies and industry trends, and proactively recommend improvements to our data platform. Basic Qualifications What you bring to the role: 4+ years of data engineering experience building, working & maintaining scalable data infrastructure (data pipelines & ETL processes on big data environments) 2+ years of experience with Cloud columnar databases (Snowflake, Google BigQuery, Amazon Redshift) Proven experience as a CI/CD Engineer or DevOps Engineer, with a focus on data platforms and analytics (Terraform, Docker, Github Actions) Experience with Cloud Platform (AWS, Google Cloud) Proficiency in query authoring (SQL) and data processing (batch and streaming) Intermediate experience with any of the programming language: Python, Go, Java, Scala, we use primarily Python Experience with ETL schedulers such as Apache Airflow, AWS Glue or similar frameworks Developer skills; demonstrating a strong passion to design scalable and fault-tolerant software systems Integration with 3rd party API SaaS applications like Salesforce, Zuora, etc Ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. 
Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions. Strong documentation skills for pipeline design and data flow diagrams. Preferred Qualifications Extensive experience with Snowflake or similar cloud warehouses (Bigquery, Redshift) Familiarity with infrastructure as code principles and tools (e.g., Terraform, CloudFormation, Github Actions) Experience with version control systems (e.g., Git) and CI/CD best practices for software development to bring automation Expert knowledge in python Familiarity with Airflow, Fivetran, Hightouch Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based. Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager. The Intelligent Heart Of Customer Experience Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working, enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request. Apply now See more open positions at Ultimate.ai Show more Show less
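Returning to the pipeline and orchestration work this posting describes, here is a minimal Airflow DAG sketch with an extract step feeding a load step. The task logic, schedule, and source/destination names are illustrative assumptions only; a real pipeline would load into a warehouse such as Snowflake or BigQuery via the appropriate hook.

```python
# Minimal Apache Airflow DAG sketch (Airflow 2.4+ style): daily extract -> load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull rows from an upstream API or database.
    rows = [{"id": 1, "status": "open"}, {"id": 2, "status": "closed"}]
    context["ti"].xcom_push(key="rows", value=rows)


def load(**context):
    rows = context["ti"].xcom_pull(key="rows", task_ids="extract")
    # Placeholder: write rows to the warehouse (e.g., via a Snowflake or BigQuery hook).
    print(f"Loaded {len(rows)} rows")


with DAG(
    dag_id="example_source_to_warehouse",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```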
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Ultimate.ai Data Science Pune, Maharashtra, India Posted on May 30, 2025 Job Description Note: This is a hybrid role, combining remote and on-site work, requiring 3 days in the office, and relocation to Pune. Our Enterprise Data & Analytics (EDA) team is looking for an experienced Senior Data Engineer to join our growing data engineering team. You’ll work in a collaborative Agile environment using the latest engineering best practices with involvement in all aspects of the software development lifecycle. You will craft and develop curated data products, applying standard architectural & data modeling practices to maintain the foundation data layer serving as a single source of truth across Zendesk. You will be primarily developing Data Warehouse Solutions in BigQuery/Snowflake using technologies such as dbt, Airflow, and Terraform. What You Get To Do Every Single Day Collaborate with team members and business partners to collect business requirements, define successful analytics outcomes and design data models Serve as Data Model subject matter expert and data model spokesperson, demonstrated by the ability to address questions quickly and accurately Implement Enterprise Data Warehouse by transforming raw data into schemas and data models for various business domains using SQL & dbt Design, build, and maintain ELT pipelines in Enterprise Data Warehouse to ensure reliable business reporting using Airflow, Fivetran & dbt Optimize data warehousing processes by refining naming conventions, enhancing data modeling, and implementing best practices for data quality testing Build analytics solutions that provide practical insights into customer 360, finance, product, sales and other key business domains Build and promote best engineering practices in areas of version control system, CI/CD, code review, pair programming Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery Work with data and analytics experts to strive for greater functionality in our data systems Basic Qualifications What you bring to the role: 5+ years of data engineering experience building, working & maintaining data pipelines & ETL processes on big data environments 5+ years of experience in Data Modeling and Data Architecture in a production environment 5+ years in writing complex SQL queries 5+ years of experience with Cloud columnar databases (Snowflake, Google BigQuery, Amazon Redshift) 2+ years of production experience working with dbt and designing and implementing Data Warehouse solutions Ability to work closely with data scientists, analysts, and other stakeholders to translate business requirements into technical solutions. Strong documentation skills for pipeline design and data flow diagrams. Intermediate experience with any of the following programming languages: Python, Go, Java, Scala (we primarily use Python) Integration with 3rd-party SaaS application APIs like Salesforce, Zuora, etc. Ensure data integrity and accuracy by conducting regular data audits, identifying and resolving data quality issues, and implementing data governance best practices. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. 
Preferred Qualifications Hands-on experience with Snowflake data platform, including administration, SQL scripting, and query performance tuning Good knowledge of modern as well as classic data modeling approaches (Kimball, Inmon, etc.) Demonstrated experience in one or many business domains (Finance, Sales, Marketing) 3+ completed projects with dbt Expert knowledge of Python Please note that Zendesk can only hire candidates who are physically located and plan to work from Karnataka or Maharashtra. Please refer to the location posted on the requisition for where this role is based. Hybrid: In this role, our hybrid experience is designed at the team level to give you a rich onsite experience packed with connection, collaboration, learning, and celebration - while also giving you flexibility to work remotely for part of the week. This role must attend our local office for part of the week. The specific in-office schedule is to be determined by the hiring manager. The Intelligent Heart Of Customer Experience Zendesk software was built to bring a sense of calm to the chaotic world of customer service. Today we power billions of conversations with brands you know and love. Zendesk believes in offering our people a fulfilling and inclusive experience. Our hybrid way of working enables us to purposefully come together in person, at one of our many Zendesk offices around the world, to connect, collaborate and learn whilst also giving our people the flexibility to work remotely for part of the week. Zendesk is an equal opportunity employer, and we’re proud of our ongoing efforts to foster global diversity, equity, & inclusion in the workplace. Individuals seeking employment and employees at Zendesk are considered without regard to race, color, religion, national origin, age, sex, gender, gender identity, gender expression, sexual orientation, marital status, medical condition, ancestry, disability, military or veteran status, or any other characteristic protected by applicable law. We are an AA/EEO/Veterans/Disabled employer. If you are based in the United States and would like more information about your EEO rights under the law, please click here. Zendesk endeavors to make reasonable accommodations for applicants with disabilities and disabled veterans pursuant to applicable federal and state law. If you are an individual with a disability and require a reasonable accommodation to submit this application, complete any pre-employment testing, or otherwise participate in the employee selection process, please send an e-mail to peopleandplaces@zendesk.com with your specific accommodation request.
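Returning to the data-audit and data-quality expectations listed in the qualifications above, here is a small, library-agnostic sketch of what such a check might look like. The table name, column, and DB-API-style connection (e.g., a Snowflake connector or psycopg2 connection whose cursors support the context-manager protocol) are assumptions for illustration only.

```python
# Minimal data-quality audit sketch using a generic DB-API connection.
# Table and column names are placeholders.
def audit_table(conn, table: str = "analytics.fct_orders") -> list[str]:
    """Run simple row-count and null-key checks and return a list of failures."""
    failures = []
    with conn.cursor() as cur:
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        (row_count,) = cur.fetchone()
        if row_count == 0:
            failures.append(f"{table}: table is empty")

        cur.execute(f"SELECT COUNT(*) FROM {table} WHERE order_id IS NULL")
        (null_keys,) = cur.fetchone()
        if null_keys > 0:
            failures.append(f"{table}: {null_keys} rows with NULL order_id")
    return failures
```

In practice a check like this would be scheduled after each load (for example as a downstream Airflow task or a dbt test) so that failures block reporting rather than surfacing later.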
Posted 2 weeks ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Summary: We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will be responsible for designing, building, and maintaining scalable data pipelines and systems to support analytics and data-driven decision-making. This role requires expertise in data processing, data modeling, and big data technologies. Key Responsibilities: Design and develop data pipelines to collect, transform, and load data into data lakes and data warehouses. Optimize ETL workflows to ensure data accuracy, reliability, and scalability. Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements. Implement and manage cloud-based data platforms (e.g., AWS, Azure, or Google Cloud Platform). Develop data models to support analytics and reporting. Monitor and troubleshoot data systems to ensure high performance and minimal downtime. Ensure data quality and security through governance best practices. Document workflows, processes, and architecture to facilitate collaboration and scalability. Stay updated with emerging data engineering technologies and trends. Required Skills and Qualifications: Strong proficiency in SQL and Python for data processing and transformation. Hands-on experience with big data technologies like Apache Spark, Hadoop, or Kafka. Knowledge of data warehousing concepts and tools such as Snowflake, BigQuery, or Redshift. Experience with workflow orchestration tools like Apache Airflow or Prefect. Familiarity with cloud platforms (AWS, Azure, GCP) and their data services. Understanding of data governance, security, and compliance best practices. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Preferred Qualifications: Certification in cloud platforms (AWS, Azure, or GCP). Experience with NoSQL databases like MongoDB, Cassandra, or DynamoDB. Familiarity with DevOps practices and tools like Docker, Kubernetes, and Terraform. Exposure to machine learning pipelines and tools like MLflow or Kubeflow. Knowledge of data visualization tools like Power BI, Tableau, or Looker.
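As a purely illustrative companion to the streaming technologies named above, here is a minimal Kafka consumer sketch using the kafka-python client. The topic, broker address, and message shape are assumed placeholders; a real pipeline would batch and persist events rather than print them.

```python
# Minimal Kafka consumer sketch (kafka-python) for the streaming-ingestion side of a pipeline.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders-events",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],  # hypothetical broker
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
    group_id="data-engineering-demo",
)

for message in consumer:
    event = message.value
    # Placeholder load step: in practice, append to a staging table or object store.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```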
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more at careers.bms.com/working-with-us. Position Summary The Software Engineer II, Aera DI role is accountable for developing data solutions and operations support of the Enterprise data lake. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements, and support data operations activities. Additional responsibilities include data analysis, data operations process and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Data and Analytics Enablement organization. Key Responsibilities The Software Engineer will be responsible for designing, building, and maintaining the data products, driving the evolution of the data products, and utilizing the most suitable data architecture required for our organization's data needs to support GPS Responsible for delivering high-quality data products and analytics-ready data solutions Develop and maintain data models to support our reporting and analysis needs. Develop ad-hoc analytic solutions from solution design to testing, deployment, and full lifecycle management. Optimize data storage and retrieval to ensure efficient performance and scalability Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements Ensure data quality and integrity through data validation and testing Implement and maintain security protocols to protect sensitive data Stay up-to-date with emerging trends and technologies in data engineering and analytics Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics focused digital capabilities Develop cloud-based (AWS) data pipelines to facilitate data processing and analysis Build end-to-end ETL data pipelines from data ingestion -> data processing -> data integration -> visualization Proficiency in Python/Node.js along with UI technologies like React.js, plus Spark, SQL, AWS Redshift, AWS S3, Glue/Glue Studio, Athena, IAM, and other native AWS services, with familiarity with Domino/data lake principles. Good to have: knowledge of Neo4j, IAM, CFT, and other native AWS services, with familiarity with data lake principles. 
Familiarity and experience with Cloud infrastructure management and work closely with the Cloud engineering team Participate in effort and cost estimations when required Partner with other data, platform, and cloud teams to identify opportunities for continuous improvements Architect and develop data solutions according to legal and company guidelines Assess system performance and recommend improvements Responsible for maintaining of data acquisition/operational focused capabilities including Data Catalog; User Access Request/Tracking; Data Use Request If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™ , every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com . Visit careers.bms.com/ eeo -accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information https //careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. Show more Show less
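Returning to the cloud-based (AWS) pipeline responsibilities listed in this posting, here is a hedged PySpark sketch of a batch job that reads raw files from S3, applies a simple transformation, and writes a curated parquet dataset back to the lake. The bucket paths, columns, and key are illustrative assumptions, and the cluster is assumed to already have S3 credentials (for example via an EMR or Glue execution role).

```python
# Minimal PySpark batch job sketch: raw S3 CSV -> curated parquet in the data lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_batch_records").getOrCreate()

raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/batches/2025-06-01/")  # hypothetical path
)

curated = (
    raw
    .withColumn("amount", F.col("amount").cast("double"))
    .withColumn("load_date", F.current_date())
    .dropDuplicates(["batch_id"])           # assumed natural key
    .filter(F.col("batch_id").isNotNull())
)

(
    curated.write
    .mode("overwrite")
    .partitionBy("load_date")
    .parquet("s3://example-curated-bucket/batches/")  # hypothetical path
)

spark.stop()
```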
Posted 2 weeks ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible. Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more at careers.bms.com/working-with-us. Position Summary The GPS Data & Analytics Software Engineer role is accountable for developing data solutions and operations support of the Enterprise data lake. The role will be accountable for developing the pipelines for the data enablement projects, production/application support and enhancements, and support data operations activities. Additional responsibilities include data analysis, data operations process and tools, data cataloguing, and developing data SME skills in the Global Product Development and Supply - Data and Analytics Enablement organization. Key Responsibilities The Data Engineer will be responsible for designing, building, and maintaining the data products, driving the evolution of the data products, and utilizing the most suitable data architecture required for our organization's data needs to support GPS Responsible for delivering high-quality data products and analytics-ready data solutions Develop and maintain data models to support our reporting and analysis needs. Develop ad-hoc analytic solutions from solution design to testing, deployment, and full lifecycle management. Optimize data storage and retrieval to ensure efficient performance and scalability Collaborate with data architects, data analysts and data scientists to understand their data needs and ensure that the data infrastructure supports their requirements Ensure data quality and integrity through data validation and testing Implement and maintain security protocols to protect sensitive data Stay up-to-date with emerging trends and technologies in data engineering and analytics Participate in the analysis, design, build, manage, and operate lifecycle of the enterprise data lake and analytics focused digital capabilities Develop cloud-based (AWS) data pipelines to facilitate data processing and analysis Build end-to-end ETL data pipelines from data ingestion -> data processing -> data integration -> visualization Proficiency in Python/Node.js along with UI technologies like React.js, plus Spark, SQL, AWS Redshift, AWS S3, Glue/Glue Studio, Athena, IAM, and other native AWS services, with familiarity with Domino/data lake principles. Good to have: knowledge of Neo4j, IAM, CFT, and other native AWS services, with familiarity with data lake principles. 
Familiarity and experience with Cloud infrastructure management and work closely with the Cloud engineering team Participate in effort and cost estimations when required Partner with other data, platform, and cloud teams to identify opportunities for continuous improvements Architect and develop data solutions according to legal and company guidelines Assess system performance and recommend improvements Responsible for maintaining of data acquisition/operational focused capabilities including Data Catalog; User Access Request/Tracking; Data Use Request If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career. Uniquely Interesting Work, Life-changing Careers With a single vision as inspiring as Transforming patients' lives through science™ , every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues. On-site Protocol BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function. BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com . Visit careers.bms.com/ eeo -accessibility to access our complete Equal Employment Opportunity statement. BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters. BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information https //careers.bms.com/california-residents/ Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations. Show more Show less
Posted 2 weeks ago
5.0 - 10.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Title: Data Architect / Delivery Lead Job Summary: The Data Architect / Delivery Lead will provide technical expertise in the analysis, design, development, rollout, and maintenance of enterprise data models and solutions, utilizing both traditional and emerging technologies such as cloud, Hadoop, NoSQL, and real-time data processing. In addition to technical expertise, the role requires leadership in driving cross-functional teams, ensuring seamless project delivery, and fostering innovation within the team. The candidate must excel in managing data architecture projects while mentoring teams in data engineering practices, including PySpark , automation, and big data integration. Essential Duties Data Architecture Design and Development: Design and develop conceptual, logical, and physical data models for enterprise-scale data lakes and data warehouse solutions, ensuring optimal performance and scalability. Implement real-time and batch data integration solutions using modern tools and technologies such as PySpark, Hadoop, and cloud-based solutions (e.g., AWS, Azure, Google Cloud). Utilize PySpark for distributed data processing, transforming and analyzing large datasets for improved data-driven decision-making. Understand and apply modern data architecture philosophies such as Data Vault, Dimensional Modeling, and Data Lake design for building scalable and sustainable data solutions. Leadership & Delivery Management: Provide leadership in data architecture and engineering projects, ensuring the integration of modern technologies and best practices in data management and transformation. Act as a trusted advisor, collaborating with business users, technical staff, and project managers to define requirements and deliver high-quality data solutions. Lead and mentor a team of data engineers, ensuring the effective application of PySpark for data engineering tasks, and supporting continuous learning and improvement within the team. Manage end-to-end delivery of data projects, including defining timelines, managing resources, and ensuring timely, high-quality delivery while adhering to project methodologies (e.g., Agile, Scrum). Data Movement & Integration: Provide expertise in data integration processes, including batch and real-time data processing using tools such as PySpark, Informatica PowerCenter, SSIS, MuleSoft, and DataStage. Develop and optimize ETL/ELT pipelines, utilizing PySpark for efficient data processing and transformation at scale, particularly for big data environments (e.g., Hadoop ecosystems). Oversee data migration efforts, ensuring high-quality and consistent data delivery while managing data transformation and cleansing processes. Documentation & Communication: Create comprehensive functional and technical documentation, including data integration architecture documentation, data models, data dictionaries, and testing plans. Collaborate with business stakeholders and technical teams to ensure alignment and provide technical guidance on data-related decisions. Prepare and present technical content and architectural decisions to senior management, ensuring clear communication of complex data concepts. Skills and Experience: Data Engineering Skills: Extensive experience in PySpark for large-scale data processing, data transformation, and working with distributed systems. Proficient in modern data processing frameworks and technologies, including Hadoop, Spark, and Flink. 
Expertise in cloud-based data engineering technologies and platforms such as AWS Glue, Azure Data Factory, or Google Cloud Dataflow. Strong experience with data pipelines, ETL/ELT frameworks, and automation techniques using tools like Airflow, Apache NiFi, or dbt. Expertise in working with big data technologies and frameworks for both structured and unstructured data. Data Architecture and Modeling: 5-10 years of experience in enterprise data modeling, including hands-on experience with ERwin, ER/Studio, PowerDesigner, or similar tools. Strong knowledge of relational databases (e.g., Oracle, SQL Server, Teradata) and NoSQL technologies (e.g., MongoDB, Cassandra). In-depth understanding of data warehousing and data integration best practices, including dimensional modeling and working with OLTP systems and OLAP cubes. Experience with real-time data architectures and cloud-based data lakes, leveraging AWS, Azure, or Google Cloud platforms. Leadership & Delivery Skills: 3-5 years of management experience leading teams of data engineers and architects, ensuring alignment of team goals with organizational objectives. Strong leadership qualities such as innovation, critical thinking, communication, time management, and the ability to collaborate effectively across teams and stakeholders. Proven ability to act as a delivery lead for data projects, driving projects from concept to completion while managing resources, timelines, and deliverables. Ability to mentor and coach team members in both technical and professional growth, fostering a culture of knowledge sharing and continuous improvement. Other Essential Skills: Strong knowledge of SQL, PL/SQL, and proficiency in scripting for data engineering tasks. Ability to translate business requirements into technical solutions, ensuring that the data solutions support business strategies and objectives. Hands-on experience with metadata management, data governance, and master data management (MDM) principles. Familiarity with modern agile methodologies, such as Scrum or Kanban, to ensure iterative and successful project delivery. Preferred Skills & Experience: Cloud Technologies: Experience with cloud data platforms such as AWS Redshift, Google BigQuery, or Azure Synapse for building scalable data solutions. Leadership: Demonstrated ability to build and lead cross-functional teams, drive innovation, and solve complex data problems. Business Consulting: Consulting experience working with clients to deliver tailored data solutions, providing expert guidance on data architecture and data management practices. Data Profiling and Analysis: Hands-on experience with data profiling tools and techniques to assess and improve the quality of enterprise data. Real-Time Data Processing: Experience in real-time data integration and streaming technologies, such as Kafka and Kinesis. Show more Show less
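To ground the data profiling skills mentioned above, here is a small, hedged pandas sketch that produces per-column null rates, distinct counts, and basic numeric statistics. The input file and columns are placeholders, not part of the posting.

```python
# Small data-profiling sketch with pandas. The CSV path and columns are placeholders.
import pandas as pd


def profile(df: pd.DataFrame) -> pd.DataFrame:
    """Return a simple per-column profile of a dataframe."""
    summary = pd.DataFrame({
        "dtype": df.dtypes.astype(str),
        "null_pct": df.isna().mean().round(4) * 100,
        "distinct": df.nunique(),
    })
    numeric = df.select_dtypes("number")
    if not numeric.empty:
        summary = summary.join(numeric.describe().T[["mean", "min", "max"]])
    return summary


if __name__ == "__main__":
    frame = pd.read_csv("customer_extract.csv")  # hypothetical input file
    print(profile(frame))
```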
Posted 2 weeks ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Intern- Data Solutions As an Intern- Data Solutions , you will be part of the Commercial Data Solutions team, providing technical/data expertise development of analytical data products to enable data science & analytics use cases. In this role, you will create and maintain data assets/domains used in the commercial/marketing analytics space – to develop best-in-class data pipelines and products, working closely with data product owners to translate data product requirements and user stories into development activities throughout all phases of design, planning, execution, testing, deployment and delivery. Your Specific Responsibilities Will Include Hands-on development of last-mile data products using the most up-to-date technologies and software / data / DevOps engineering practices Enable data science & analytics teams to drive data modeling and feature engineering activities aligned with business questions and utilizing datasets in an optimal way Develop deep domain expertise and business acumen to ensure that all specificalities and pitfalls of data sources are accounted for Build data products based on automated data models, aligned with use case requirements, and advise data scientists, analysts and visualization developers on how to use these data models Develop analytical data products for reusability, governance and compliance by design Align with organization strategy and implement semantic layer for analytics data products Support data stewards and other engineers in maintaining data catalogs, data quality measures and governance frameworks Education B.Tech / B.S., M.Tech / M.S. or PhD in Engineering, Computer Science, Engineering, Pharmaceuticals, Healthcare, Data Science, Business, or related field Required Experience High proficiency in SQL, Python and AWS Good understanding and comprehension of the requirements provided by Data Product Owner and Lead Analytics Engineer Understanding in creating / adopting data models to meet requirements from Marketing, Data Science, Visualization stakeholders Experience with including feature engineering Hands on with cloud-based (AWS / GCP / Azure) data management platforms and typical storage/compute services (Databricks, Snowflake, Redshift, etc.) Hands on with modern data stack tools such as Matillion, Starburst, ThoughtSpot and low-code tools (e.g. Dataiku) Excellent interpersonal and communication skills, with the ability to quickly establish productive working relationships with a variety of stakeholders Current Employees apply HERE Current Contingent Workers apply HERE Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. 
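As an illustration of the feature-engineering activities described in the responsibilities above, here is a minimal pandas sketch that turns a raw interactions table into per-customer features for downstream analytics. The column names, sample values, and reference date are assumptions for illustration only.

```python
# Minimal feature-engineering sketch with pandas: raw interactions -> per-customer features.
import pandas as pd

interactions = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2],
    "channel": ["email", "call", "email", "email", "web"],
    "date": pd.to_datetime(["2025-01-02", "2025-01-10", "2025-01-03", "2025-02-01", "2025-02-05"]),
})

features = (
    interactions
    .groupby("customer_id")
    .agg(
        n_interactions=("channel", "size"),
        n_channels=("channel", "nunique"),
        last_touch=("date", "max"),
    )
    # Hypothetical reference date for recency; in practice this would be the run date.
    .assign(days_since_last=lambda d: (pd.Timestamp("2025-03-01") - d["last_touch"]).dt.days)
    .reset_index()
)
print(features)
```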
Employee Status: Intern/Co-op (Fixed Term)
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business Intelligence (BI), Data Management, Data Modeling, Data Visualization, Measurement Analysis, Stakeholder Relationship Management, Waterfall Model
Preferred Skills:
Job Posting End Date: 06/03/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R344331
Posted 2 weeks ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Project Data Manager Location: [Your Location] Department: Data & Analytics / Project Management Office Reports To: Head of Data Engineering / Director of Projects Job Summary: We are looking for an experienced and versatile Project Data Manager to lead data-centric projects, combining robust project management with deep technical data expertise. This hybrid role is responsible for overseeing the successful delivery of data initiatives—from understanding stakeholder needs to designing and deploying scalable data pipelines and analytics-ready infrastructure. You will ensure alignment between business goals, data strategy, and technical execution while maintaining high standards of data quality and operational efficiency. Key Responsibilities: Project Management: Manage the planning, execution, and delivery of data engineering projects, ensuring they are completed on time, within scope, and within budget. Define project scopes, objectives, timelines, resource plans, and risk mitigation strategies in coordination with stakeholders. Serve as the primary point of contact between technical teams, business units, and leadership. Track project milestones, deliverables, and KPIs using appropriate tools (e.g., Jira, Asana, MS Project). Facilitate project meetings, status updates, and stakeholder communications. Data Engineering & Technical Responsibilities: Design, develop, and maintain scalable data pipelines and ETL/ELT processes to ingest, transform, and store data from various structured and unstructured sources. Implement efficient data models and schemas to support business intelligence, reporting, and machine learning use cases. Ensure data accuracy and quality by building validation, monitoring, and error-handling mechanisms. Collaborate closely with analysts, data scientists, and business units to deliver clean, reliable, and accessible data. Optimize performance, scalability, and cost-effectiveness of data infrastructure and cloud-based platforms. Stay up to date with emerging trends in data engineering, cloud computing, and project management best practices. Required Qualifications: Bachelor's degree in Computer Science, Data Engineering, Information Systems, or a related field. Minimum 8 years of experience in data engineering or related technical roles, with at least 2 years of experience in a project management capacity. Proficient in SQL, Python, and data pipeline tools (e.g., Airflow, Kafka, Talend, Informatica). Hands-on experience with relational and NoSQL databases (e.g., Postgres, MongoDB, Oracle), data warehousing platforms (e.g., Snowflake, Redshift, BigQuery), and distributed computing frameworks (e.g., Spark, Flink). Familiarity with cloud services (AWS, Azure, GCP) and infrastructure-as-code concepts. Understanding of data governance, data security, and compliance best practices. Strong working knowledge of Agile, Scrum, or Waterfall methodologies. Excellent analytical thinking, problem-solving skills, and attention to detail. Effective verbal and written communication skills for cross-functional collaboration and stakeholder management. Preferred Qualifications: PMP, PRINCE2, or Agile/Scrum certification. 
Experience with data visualization platforms such as Power BI, Tableau, or Looker. Experience building CI/CD pipelines for data deployment and managing version control (e.g., Git). Prior experience in leading data transformation initiatives or enterprise-wide data modernization projects. Would you like this tailored further for a specific industry (e.g., healthcare, fintech, SaaS) or level (mid-senior, director, etc.)? Show more Show less
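To make the pipeline-ownership side of the role concrete, here is a minimal sketch of the kind of scheduled ETL job the responsibilities above describe, using Apache Airflow (one of the orchestrators the posting lists). The DAG name, schedule, and the two helper functions are illustrative assumptions, not part of the posting.

    # Minimal Airflow DAG sketch: extract -> load, run daily.
    # Pipeline name, schedule, and helper logic are hypothetical placeholders.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders(**context):
        # Placeholder: pull the previous day's records from a source system into staging.
        print("extracting orders for", context["ds"])

    def load_orders(**context):
        # Placeholder: validate row counts / nulls, then load the staging data
        # into the analytics warehouse (e.g. Snowflake or Redshift).
        print("loading orders for", context["ds"])

    with DAG(
        dag_id="orders_daily_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract", python_callable=extract_orders)
        load = PythonOperator(task_id="load", python_callable=load_orders)
        extract >> load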
Posted 2 weeks ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Join Amgen's Mission to Serve Patients
If you feel like you’re part of something bigger, it’s because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world’s leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It’s time for a career you can be proud of.
Sr Associate Software Engineer
Live
What You Will Do
Let’s do this. Let’s change the world. In this vital role, we are seeking a highly skilled and hands-on Senior Software Engineer – Search to drive the development of intelligent, scalable search systems across our pharmaceutical organization. You'll work at the intersection of software engineering, AI, and life sciences to enable seamless access to structured and unstructured content—spanning research papers, clinical trial data, regulatory documents, and internal scientific knowledge. This is a high-impact role where your code directly accelerates innovation and decision-making in drug development and healthcare delivery.
Design, implement, and optimize search services using technologies such as Elasticsearch, OpenSearch, Solr, or vector search frameworks. Collaborate with data scientists and analysts to deliver data models and insights. Develop custom ranking algorithms, relevancy tuning, and semantic search capabilities tailored to scientific and medical content. Support the development of intelligent search features like query understanding, question answering, summarization, and entity recognition. Build and maintain robust, cloud-native APIs and backend services to support high-availability search infrastructure (e.g., AWS, GCP, Azure). Implement CI/CD pipelines, observability, and monitoring for production-grade search systems. Work closely with Product Owners and Tech Architects. Enable indexing of both structured (e.g., clinical trial metadata) and unstructured (e.g., PDFs, research papers) content. Design & develop modern data management tools to curate our most important data sets, models and processes, while identifying areas for process automation and further efficiencies.
Expertise in programming languages such as Python, Java, React, TypeScript, or similar. Strong experience with data storage and processing technologies (e.g., Hadoop, Spark, Kafka, Airflow, SQL/NoSQL databases). Demonstrate strong initiative and ability to work with minimal supervision or direction. Strong experience with cloud infrastructure (AWS, Azure, or GCP) and infrastructure as code like Terraform. In-depth knowledge of relational and columnar SQL databases, including database design. Expertise in data warehousing concepts (e.g. star schema, entitlement implementations, SQL vs. NoSQL modeling, milestoning, indexing, partitioning). Experience in REST and/or GraphQL. Experience in creating Spark jobs for data transformation and aggregation. Experience with distributed, multi-tiered systems, algorithms, and relational databases.
Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Analyze and understand the functional and technical requirements of applications. Identify and resolve software bugs and performance issues. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes.
Basic Qualifications: Degree in computer science & engineering preferred, with 6-8 years of software development experience. Proficient in Python, Java, React, TypeScript, Postgres, Databricks. Hands-on experience with search technologies (Elasticsearch, Solr, OpenSearch, or Lucene). Hands-on experience with full stack software development. Proficient in programming languages and tools such as Java, Python, FastAPI, Databricks/RDS, data engineering, S3 buckets, ETL, Hadoop, Spark, Airflow, AWS Lambda. Experience with data streaming frameworks (Apache Kafka, Flink). Experience with cloud platforms (AWS, Azure, Google Cloud) and related services (e.g., S3, Redshift, BigQuery, Databricks). Hands-on experience with various cloud services; understands the pros and cons of various cloud services under well-architected cloud design principles. Working knowledge of open-source tools and services such as AWS Lambda. Strong problem solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
Preferred Qualifications: Experience in Python, Java, React, FastAPI, TypeScript, JavaScript, CSS, HTML is desirable. Experienced with API integration, serverless, microservices architecture. Experience in Databricks, PySpark, Spark, SQL, ETL, Kafka. Solid understanding of data governance, data security, and data quality best practices. Experience with unit testing, building, and debugging code. Experienced with the AWS/Azure platform, building and deploying code. Experience with vector databases for large language models, Databricks, or RDS. Experience with DevOps CI/CD build and deployment pipelines. Experience in Agile software development methodologies. Experience in end-to-end testing. Experience with additional modern database technologies.
Good To Have Skills: Willingness to work on AI applications. Experience in NLMs, Solr search. Experience with popular large language models. Experience with the LangChain or LlamaIndex framework for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis.
Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
Thrive
What You Can Expect From Us
As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination. In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us.
careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
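As a flavor of the relevancy-tuning work the posting above describes, here is a minimal sketch of a boosted full-text query against a hypothetical clinical-documents index, assuming the Elasticsearch 8.x Python client. The index name, field names, and boost values are illustrative assumptions, not Amgen's actual schema.

    # Minimal sketch: boosted multi-field search over a hypothetical "clinical_docs" index.
    # Index name, field names, and boosts are illustrative only.
    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    response = es.search(
        index="clinical_docs",
        query={
            "multi_match": {
                "query": "anti-TNF adverse events",
                # Weight title matches above abstract and body text.
                "fields": ["title^3", "abstract^2", "body"],
            }
        },
        size=10,
    )

    for hit in response["hits"]["hits"]:
        print(hit["_score"], hit["_source"].get("title"))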
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Join our dynamic team to elevate your cloud management skills and drive impactful solutions. As a CTIM AWS and Databricks LOB SRE-Platform Operations member, you will manage cloud infrastructure, provide daily support, and collaborate with technology teams to resolve issues efficiently. Your expertise in AWS and Python is crucial for success in this role.
Job Responsibilities
Act as the first point of contact for AWS & Databricks inquiries or issues. Collaborate with technology teams to enhance automation within AWS. Identify and implement cost optimization strategies in AWS. Focus on continuous service improvement and efficiency enhancement. Manage stakeholder communication during technology incidents. Document support processes and FAQs to address knowledge gaps. Foster relationships with SRE Technology teams and the business user community.
Required Qualifications, Capabilities, And Skills
Formal training or certification in software engineering concepts and 3+ years of applied experience. Expertise in performing a Data Engineer role, along with good experience programming with Python/Scala and Spark. Hands-on experience in AWS migration and implementation. Experience with DevOps CI/CD tools (e.g., Git, Jenkins). Proficiency in AWS services like EC2, S3, Lambda, RDS, Redshift. Strong analytical and problem-solving skills. Technical knowledge of Unix/Linux platforms. Excellent interpersonal skills and understanding of business processes.
Preferred Qualifications, Capabilities, And Skills
Business domain knowledge in Risk Management, Finance, or Compliance. Experience with big data architecture (Hadoop stack). Hands-on experience in Databricks for data engineering or SRE roles. Familiarity with stream-processing systems like Storm, Spark-Streaming.
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
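One concrete example of the AWS cost-optimization work mentioned in the responsibilities above: a short boto3 sketch that reports unattached EBS volumes, a common source of idle spend. The region is an assumption; in practice credentials and region would come from the environment.

    # Minimal sketch: list unattached (billable but idle) EBS volumes with boto3.
    # Region is an assumption; credentials come from the standard AWS config/env.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    paginator = ec2.get_paginator("describe_volumes")
    pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

    total_gib = 0
    for page in pages:
        for vol in page["Volumes"]:
            total_gib += vol["Size"]
            print(vol["VolumeId"], vol["Size"], "GiB", vol["AvailabilityZone"])

    print("Unattached capacity:", total_gib, "GiB")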
Posted 2 weeks ago
3.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family-AWS Cloud Native – Full Stack Engineer The opportunity We are the only professional services organization who has a separate business dedicated exclusively to the financial services marketplace. Join Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including Asset management, Banking and Capital Markets, Insurance and Private Equity, Health, Government, Power and Utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. We’re looking for an AWS Cloud Native – Full Stack Engineer at EY GDS. You will work on designing and implementing cloud-native applications and services using AWS technologies. You will collaborate with development teams to build, deploy, and manage applications that meet business needs and leverage AWS best practices. This is a fantastic opportunity to be part of a leading firm whilst being instrumental in the growth of a new service offering. We are the only professional services organization who has a separate business dedicated exclusively to the financial and non-financial services marketplace. Join Digital Engineering team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge. The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surround business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design, and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations. Your Key Responsibilities Application Development: Design and develop cloud-native applications and services using Angular/React/Typescript, Java Springboot /Node, AWS services such as Lambda, API Gateway, ECS, EKS, and DynamoDB, Glue, Redshift, EMR. 
Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates. Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS. Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency. Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices. Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations. Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting. Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow. Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability. Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration. Skills And Attributes For Success Required Skills: AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, and RDS, Glue, Redshift, EMR. Programming: Strong programming skills in languages such as Python, Java, or Node.js, Angular/React/Typescript. CI/CD: Experience with CI/CD tools and practices, including AWS CodePipeline, CodeBuild, and CodeDeploy. Infrastructure as Code: Familiarity with IaC tools like AWS CloudFormation or Terraform for automating application infrastructure. Security: Understanding of AWS security best practices, including IAM, KMS, and encryption. Observability Tools: Proficiency in using observability tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and ELK Stack. Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm. Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or ELK Stack. Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams. Preferred Qualifications: Certifications: AWS Certified Solutions Architect – Associate or Professional, AWS Certified Developer – Associate, or similar certifications. Experience: 3-4 Years previous experience in an application engineering role with a focus on AWS technologies. Agile Methodologies: Familiarity with Agile development practices and methodologies. Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues. Education: Degree: Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. 
We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
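For context on the application-development responsibilities in the posting above, here is a minimal sketch of a Python AWS Lambda handler returning a JSON response through an API Gateway proxy integration. The route behavior and payload shape are illustrative assumptions, not the firm's actual services.

    # Minimal sketch: Lambda handler for an API Gateway (proxy) integration.
    # The payload shape is illustrative; real handlers add validation, auth, and logging.
    import json

    def lambda_handler(event, context):
        # API Gateway proxy integrations pass query string parameters in the event.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"hello, {name}"}),
        }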
Posted 2 weeks ago
45.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 45 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description We are seeking a detail-oriented and highly skilled Data Engineering Test Automation Engineer to ensure the quality, reliability, and performance of our data pipelines and platforms. The ideal candidate will have a strong background in data testing , ETL validation , and test automation frameworks . You will work closely with data engineers, analysts, and DevOps teams to build robust test suites for large-scale data solutions. This role combines deep technical execution with a solid foundation in QA best practices including test planning, defect tracking, and test lifecycle management . You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines , contributing to the design of automation frameworks , and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms . The role is highly technical and hands-on , with a strong focus on automation, data accuracy, completeness, consistency , and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities Design, develop, and maintain automated test scripts for data pipelines, ETL jobs, and data integrations. Validate data accuracy, completeness, transformations, and integrity across multiple systems. Collaborate with data engineers to define test cases and establish data quality metrics. Develop reusable test automation frameworks and CI/CD integrations (e.g., Jenkins, GitHub Actions). Perform performance and load testing for data systems. Maintain test data management and data mocking strategies. Identify and track data quality issues, ensuring timely resolution. Perform root cause analysis and drive corrective actions. Contribute to QA ceremonies (standups, planning, retrospectives) and drive continuous improvement in QA processes and culture. Must-Have Skills Experience in QA roles, with strong exposure to data pipeline validation and ETL Testing. Domain Knowledge of R&D domain of life science. Validate data accuracy, transformations, schema compliance, and completeness across systems using PySpark and SQL. Strong hands-on experience with Python, and optionally PySpark, for developing automated data validation scripts. Proven experience in validating ETL workflows, with a solid understanding of data transformation logic, schema comparison, and source-to-target mapping. Experience working with data integration and processing platforms like Databricks/Snowflake, AWS EMR, Redshift etc… Experience in manual and automated testing of data pipelines executions for both batch and real-time data pipelines. Perform performance testing of large-scale complex data engineering pipelines. Ability to troubleshoot data issues independently and collaborate with engineering teams for root cause analysis Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. 
Hands-on experience with API testing using Postman, pytest, or custom automation scripts. Experience integrating automated tests into CI/CD pipelines using tools like Jenkins, GitHub Actions, or similar. Knowledge of cloud platforms such as AWS, Azure, GCP.
Good-to-Have Skills
Certifications in Databricks, AWS, Azure, or data QA (e.g., ISTQB). Understanding of data privacy, compliance, and governance frameworks. Knowledge of UI automated testing frameworks like Selenium, JUnit, TestNG. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch.
Education and Professional Certifications
Master’s degree and 3 to 7 years of Computer Science, IT or related field experience OR Bachelor’s degree and 4 to 9 years of Computer Science, IT or related field experience.
Soft Skills
Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
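To illustrate the kind of automated ETL validation the posting above calls for, here is a minimal PySpark sketch comparing row counts and checking key-column rules between a source and a target layer. The S3 paths and the key column are hypothetical placeholders.

    # Minimal sketch: source-to-target reconciliation checks with PySpark.
    # Paths and the "order_id" key column are hypothetical placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl_validation").getOrCreate()

    source = spark.read.parquet("s3://raw-zone/orders/")
    target = spark.read.parquet("s3://curated-zone/orders/")

    # Completeness: no rows lost between layers.
    assert source.count() == target.count(), "row count mismatch between source and target"

    # Integrity: the business key must never be null in the curated layer.
    null_keys = target.filter(F.col("order_id").isNull()).count()
    assert null_keys == 0, f"{null_keys} curated rows have a null order_id"

    # Consistency: no duplicate business keys after transformation.
    dupes = target.groupBy("order_id").count().filter(F.col("count") > 1).count()
    assert dupes == 0, f"{dupes} duplicate order_id values in target"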
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and encouraging team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We develop a growing internal community and are committed to creating a workplace that looks like the world that we serve.
Pay and Benefits: Competitive compensation, including base pay and annual incentive. Comprehensive health and life insurance and well-being benefits, based on location. Pension / Retirement benefits. Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being. DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (Tuesdays, Wednesdays, and a day unique to each team or employee).
The impact you will have in this role: The Data Quality and Integration role is a highly technical position – considered a technical expert in system implementation – with an emphasis on providing design, ETL, data quality and warehouse modeling expertise. This role will be accountable for, and requires knowledge of, capital development efforts. Performs, at an experienced level, the technical design of application components; builds applications and interfaces between applications; and understands data security, retention, and recovery. Can research technologies independently and recommend appropriate solutions. Contributes to technology-specific best practices & standards; contributes to success criteria from design through deployment, including reliability, cost-effectiveness, performance, data integrity, maintainability, reuse, extensibility, usability and scalability; provides expertise on significant application components, vendor products, program languages, databases, operating systems, etc.; completes the plan by building components, testing, configuring, tuning, and deploying solutions. The Software Engineer (SE) for Data Quality and Integration applies specific technical knowledge of data quality and data integration in order to assist in the design and construction of critical systems. The SE works as part of an AD project squad and may interact with the business, Functional Architects, and domain experts on related integrating systems. The SE will contribute to the design of components or individual programs and participates fully in the construction and testing. This involves working with the Senior Application Architect, and other technical contributors at all levels. This position contributes expertise to project teams through all phases, including post-deployment support. This means researching specific technologies and applications, contributing to the solution design, supporting development teams, testing, troubleshooting, and production support. The ASD must possess experience in integrating large volumes of data, efficiently and in a timely manner. This position requires working closely with the functional and governance functions, and more senior technical resources, reviewing technical designs and specifications, and contributing to cost estimates and schedules.
What You'll Do: Technology Expertise – is a domain expert on one or more of programming languages, vendor products (specifically Informatica Data Quality and Informatica Data Integration Hub), DTCC applications, data structures, business lines.
Platforms – works with Infrastructure partners to stand up development, testing, and production environments. Requirements Elaboration – works with the Functional Architect to ensure designs satisfy functional requirements. Data Modeling – reviews and extends data models. Data Quality Concepts – experience in Data Profiling, Scorecards, Monitoring, Matching, Cleansing. Is aware of frameworks that promote concepts of isolation, extensibility, and extendibility. System Performance – contributes to solutions that satisfy performance requirements; constructs test cases and strategies that account for performance requirements; tunes application performance issues. Security – implements solutions and completes test plans, working with and mentoring other team members on standard processes. Standards – is aware of technology standards and understands that technical solutions need to be consistent with them. Documentation – develops and maintains system documentation. Is familiar with different software development methodologies (Waterfall, Agile, Scrum, Kanban). Aligns risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalates appropriately. Educational background and work experience that includes mathematics and conversion of expressions into run-time executable code. Ensures own and team’s practices support success across all geographic locations. Mitigates risk by following established procedures and monitoring controls, spotting key errors and demonstrating strong ethical behavior. Helps roll out standards and policies to other team members. Financial industry experience including Trades, Clearing and Settlement.
Education: Bachelor's degree or equivalent experience.
Talents Needed for Success: Minimum of 3+ years in Data Quality and Integration. Basic understanding of Logical Data Modeling and database design is a plus. Technical experience with multiple database platforms: Sybase, Oracle, DB2 and distributed databases like Teradata/Greenplum/Redshift/Snowflake containing high volumes of data. Knowledge of data management processes and standard methodologies preferred. Proficiency with Microsoft Office tools required. Supports team in managing client expectations and resolving issues on time. Technical skills highly preferred along with strong analytical skills.
Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
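The Informatica Data Quality work itself is configured in the vendor tooling, but as a rough illustration of the profiling and scorecard concepts listed above, here is a small pandas sketch that computes completeness and uniqueness metrics for a hypothetical trade extract. The file name, columns, and rules are assumptions for illustration only.

    # Minimal sketch: column-level profiling (completeness, cardinality, uniqueness)
    # for a hypothetical trades extract; file and column names are placeholders.
    import pandas as pd

    df = pd.read_csv("trades_extract.csv")

    profile = pd.DataFrame({
        "non_null_pct": df.notna().mean() * 100,   # completeness per column
        "distinct_values": df.nunique(),           # cardinality per column
    })
    profile["uniqueness_pct"] = df.nunique() / len(df) * 100

    print(profile.round(2))

    # A simple scorecard-style rule: the trade identifier must be fully populated and unique.
    assert df["trade_id"].notna().all(), "trade_id has missing values"
    assert df["trade_id"].is_unique, "trade_id has duplicates"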
Posted 2 weeks ago
4.0 years
0 Lacs
India
On-site
Description Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business development, Ad operations) by developing scalable analytical solutions, identifying problems, coming up with KPIs and monitor those to measure impact/success of product improvements/changes and streamlining processes. This will be an exciting and challenging role that will enable you to work with large data sets, expose you to cutting edge analytical techniques, work with latest AWS analytics infrastructure (Redshift, s3, Athena, and gain experience in the usage of location data to drive businesses. Working in a dynamic start up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and developing a deep understanding of human behavior in the real world. They would also have excellent communication skills, be able to synthesize and present complex information and be a fast learner. You Will Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics Understand objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product, Engineering Derive insights and putting them together to build a story to solve a given problem Suggest ways for process improvements in terms of script optimization, automating repetitive tasks Create and automate reports and dashboards through Python to track certain metrics basis given requirements Automate reports and dashboards through Python Technical Skills (Must Have) 4-year B.Tech degree in Computer Science, Statistics, Mathematics, Economics or related fields 2-4 years of experience in working with data and conducting statistical and/or numerical analysis Ability to write SQL code Scripting/automation using python Hands on experience in data visualisation tool like Looker/Tableau/Quicksight Basic to advance level understanding of statistics Other Skills (Must Have) Be willing and able to quickly learn about new businesses, database technologies and analysis techniques Strong oral and written communication Understanding of patterns/trends and draw insights from those Preferred Qualifications (Nice to have) Experience working with large datasets Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3) Benefits At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love. Parental leave- Maternity and Paternity Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays) In Office Daily Catered Breakfast, Lunch, Snacks and Beverages Health cover for any hospitalization. Covers both nuclear family and parents Tele-med for free doctor consultation, discounts on health checkups and medicines Wellness/Gym Reimbursement Pet Expense Reimbursement Childcare Expenses and reimbursements Employee referral program Education reimbursement program Skill development program Cell phone reimbursement (Mobile Subsidy program). Internet reimbursement/Postpaid cell phone bill/or both. Birthday treat reimbursement Employee Provident Fund Scheme offering different tax saving options such as Voluntary Provident Fund and employee and employer contribution up to 12% Basic Creche reimbursement Co-working space reimbursement National Pension System employer match Meal card for tax benefit Special benefits on salary account Show more Show less
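As a sketch of the report-automation work described above, here is how a scheduled Python job might run an Athena query and poll for the result using boto3. The database name, SQL, and S3 output location are placeholders, not the team's actual setup.

    # Minimal sketch: run an Athena query from Python and poll until it finishes.
    # Database name, SQL, and the S3 output location are placeholders.
    import time
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    execution = athena.start_query_execution(
        QueryString="SELECT event_date, COUNT(*) AS visits FROM visits GROUP BY event_date",
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://analytics-query-results/"},
    )
    query_id = execution["QueryExecutionId"]

    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state == "SUCCEEDED":
        rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
        for row in rows:  # first row is the header
            print([col.get("VarCharValue") for col in row["Data"]])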
Posted 2 weeks ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Role The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 5 to 8 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills T echnical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Experience of ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstratable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of Job scheduler like Autosys. Basics of Entitlement management Certification on any of the above topics would be an advantage. 
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
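A minimal sketch of the kind of batch pipeline step the ETL requirements above refer to, using Apache Spark (one of the listed platforms): read raw data, apply a simple transformation, and write analytics-ready, partitioned Parquet. The paths and column names are illustrative assumptions.

    # Minimal sketch: batch transform with PySpark, writing partitioned Parquet.
    # Input/output paths and column names are illustrative placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("transactions_batch").getOrCreate()

    raw = spark.read.option("header", True).csv("s3://landing/transactions/")

    curated = (
        raw.withColumn("amount", F.col("amount").cast("decimal(18,2)"))
           .withColumn("booking_date", F.to_date("booking_ts"))
           .filter(F.col("amount").isNotNull())
    )

    (curated.write
        .mode("overwrite")
        .partitionBy("booking_date")
        .parquet("s3://curated/transactions/"))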
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Our Values : At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world. Job Overview And Responsibilities The United Data Engineering team designs, develops and maintains massively scaling technology solutions that are brought to life with innovative architectures, data analytics and digital solutions. The Data Engineering team is building a modern data technology platform in the cloud with advanced DevOps and Machine Learning capabilities. The Data Engineering team at United Airlines is on a transformational journey to unlock the full potential of enterprise data, build a dynamic, diverse and inclusive culture and develop a modern cloud-based data lake architecture to scale our applications, and drive growth using data and machine learning. Our objective is to enable the enterprise to unleash the potential of data through innovation and agile thinking, and to execute on an effective data strategy to transform business processes, rapidly accelerate time to market and enable insightful decision making. United Airlines is seeking talented people to join the Data Engineering team. Data Engineering organization is responsible for driving data driven insights & innovation to support the data needs for commercial and operational projects with a digital focus. 
Partner with various teams to define and execute data acquisition, transformation, processing and make data actionable for operational and analytics initiatives that create sustainable revenue and share growth Design, develop, and implement streaming and near-real time data pipelines that feed systems that are the operational backbone of our business Utilize programming languages like Java, Scala, Python with RDBMS, NoSQL databases and Cloud based data warehousing like AWS Redshift Execute unit tests and validating expected results to ensure accuracy & integrity of data and applications through analysis, coding, writing clear documentation and problem resolution Drive the adoption of data processing and analysis using AWS services and help cross train other members of the team Leverage strategic and analytical skills to understand and solve customer and business centric questions Coordinate and guide cross-functional projects that involve team members across all areas of the enterprise, vendors, external agencies and partners Leverage data from a variety of sources to develop data marts and insights that provide a comprehensive understanding of the business Develop and implement innovative solutions leading to automation Mentor and train junior engineers Use of Agile methodologies to manage projects Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions in full-stack development tools and technologies Expand and share your passion by staying on top of tech trends, experimenting with and learning new technologies, and mentoring other members of the engineering community Work with a team of developers with deep experience in Digital technology, machine learning, distributed micro services, and full stack systems This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc. Qualifications - External Required Bachelor’s Degree in computer science or related STEM field Experience with relational database systems like MS SQL Server, Oracle, Teradata Excellent O/S knowledge and experience in Linux or Windows with basic knowledge of the other MCSE/RHCE or equivalent level of knowledge preferred Experience of implementing and supporting AWS based instances and services (e.g. EC2, S3, EBS, ELB, RDS, IAM, Route53, Cloud front, Elastic cache, WAF etc.) 
Scripting ability in one or more of Python, Bash, Perl. Git for version control useful. Working with or supporting containerized environments (ECS/EKS/Kubernetes/Docker). Agile engineering practices. Must be legally authorized to work in India for any employer without sponsorship. Must be fluent in English (written and spoken). Successful completion of interview required to meet job qualification. Reliable, punctual attendance is an essential function of the position.
Qualifications - Preferred
Master’s in computer science or related STEM field. Experience with cloud-based systems like AWS, Azure. Strong experience with continuous integration & delivery using Agile methodologies. AWS Certified Developer – Associate or Professional. AWS Certified Solutions Architect – Associate or Professional. AWS Certified Specialty certification (Big Data Analytics, Machine Learning). Experience with continuous integration & delivery using Agile methods. Experience with data quality tools including Deequ or Apache Griffin. Experience building PySpark-based services in a production environment. GGN00001797
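To make the streaming-pipeline responsibility above concrete, here is a minimal Spark Structured Streaming sketch that reads events from Kafka and appends them to a data-lake path. Kafka itself is an assumption here (the posting only says streaming/near-real-time), and the topic, brokers, and paths are placeholders.

    # Minimal sketch: near-real-time ingestion with Spark Structured Streaming.
    # Kafka is assumed as the source; topic, brokers, and paths are placeholders.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("flight_events_stream").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "flight-events")
        .load()
        .select(F.col("key").cast("string"), F.col("value").cast("string"), "timestamp")
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "s3://data-lake/flight-events/")
        .option("checkpointLocation", "s3://data-lake/checkpoints/flight-events/")
        .outputMode("append")
        .start()
    )

    query.awaitTermination()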
Posted 2 weeks ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Get to know Okta Okta is The World’s Identity Company. We free everyone to safely use any technology—anywhere, on any device or app. Our Workforce and Customer Identity Clouds enable secure yet flexible access, authentication, and automation that transforms how people move through the digital world, putting Identity at the heart of business security and growth. At Okta, we celebrate a variety of perspectives and experiences. We are not looking for someone who checks every single box - we’re looking for lifelong learners and people who can make us better with their unique experiences. Join our team! We’re building a world where Identity belongs to you. The Opportunity: Are you a data-driven storyteller with a passion for transforming raw information into actionable insights that drive tangible business outcomes? Do you thrive on collaborating directly with business stakeholders to understand their needs and then architecting elegant data solutions? If so, we have an exciting opportunity for a highly skilled and motivated Manager-level Business Analytics to join our growing team in Bangalore. In this role, you will be instrumental in empowering our business users with the data and visualizations they need to make informed decisions, analyze and drive improvements to our Finance operational performance, business decisions, and strategy. You will drive the analytics lifecycle, from initial consultation to the delivery of impactful dashboards and data sets. Please note - this role operates to PST, so you will work from 5pm - 2am IST. This role is hybrid, with the expectation of working from our local office on specified days based on local expectations. We will continuously assess this arrangement, and it may be subject to change based on business needs and evolving circumstances. What You'll Do: Strategic Alignment: Align analytics initiatives with key business objectives and contribute to the development of data-driven strategies that lead to measurable improvements. Become a Trusted Advisor: Partner closely with business users across various departments to understand their strategic objectives, identify their analytical requirements, and translate those needs into clear and actionable data and reporting solutions. Consultative Analysis: Engage with stakeholders to explore their business questions, guide them on appropriate analytical approaches, and help them define key metrics and performance indicators (KPIs). Data Architecture & Design: Partner with our Data and Insights team to design and develop robust and efficient data models and datasets optimized for visualization and analysis, ensuring data accuracy and integrity. Expert Tableau Development: Leverage your deep expertise in Tableau to create intuitive, interactive, and visually compelling dashboards and reports that effectively communicate key insights and trends. Data Wrangling & Transformation: Utilize Fivetran and/or Python scripting to extract, transform, and load data from various sources into our data warehouse or analytics platforms. End-to-End Ownership: Take full ownership of the analytics projects you lead, from initial scoping and data acquisition to dashboard deployment, user training, and ongoing maintenance. Drive Data Literacy: Educate and empower business users to effectively utilize dashboards and data insights to drive business outcomes. Stay Ahead of the Curve: Continuously explore new data visualization techniques, analytical methodologies, and data technologies to enhance our analytics capabilities. 
Collaborate and Communicate: Effectively communicate complex analytical findings and recommendations to stakeholders. Lead cross-functional collaborations to achieve project goals. Data Governance & Quality: Ensure data accuracy, consistency, and integrity in all developed datasets and dashboards, contributing to data governance efforts. Performance Monitoring & Iteration: Monitor the performance and user adoption of developed dashboards, gather feedback, and implement necessary revisions for continuous improvement. Documentation & Training: Develop comprehensive documentation for created dashboards and datasets. Provide training and support to business users to ensure effective utilization of analytics tools. What You'll Bring: 5-7+ years of experience in a Business Analytics, Data Analytics, or similar role supporting Finance teams with increasing responsibility. Proven experience working directly with business stakeholders to understand their needs and deliver data-driven solutions. Expert-level proficiency in Tableau, including advanced calculations, parameters, actions, and performance optimization. Strong hands-on experience in building and optimizing data sets for Tableau. Solid experience with data integration tools, preferably Fivetran, and the ability to design and implement data pipelines. Proficiency in Python for data manipulation, cleaning, and transformation (e.g., using libraries like Pandas). Strong understanding of data warehousing principles and experience with platforms such as Snowflake or Redshift. Advanced SQL skills for data extraction, transformation, and querying. Excellent problem-solving and analytical skills with a strong attention to detail and the ability to translate business questions into analytical frameworks. Exceptional written and verbal communication skills, with the ability to present complex data insights effectively to both technical and non-technical audiences. Proven ability to manage multiple analytics projects simultaneously, prioritize tasks, and meet deadlines effectively. Excellent collaboration and interpersonal skills with the ability to build strong working relationships with business stakeholders. Bachelor's degree in a quantitative field such as Engineering, Finance/Accounting/ Business, Economics, Statistics, Mathematics, Computer Science, or a related discipline. (Master's degree a plus). Bonus Points For: Experience in SaaS software industry is highly preferred. Experience with other data visualization tools (e.g., Power BI, Looker). Familiarity with cloud-based data platforms (e.g., BigQuery). Familiarity with basic statistical concepts and methodologies is a plus. What you can look forward to as a Full-Time Okta employee! Amazing Benefits Making Social Impact Developing Talent and Fostering Connection + Community at Okta Okta cultivates a dynamic work environment, providing the best tools, technology and benefits to empower our employees to work productively in a setting that best and uniquely suits their needs. Each organization is unique in the degree of flexibility and mobility in which they work so that all employees are enabled to be their most creative and successful versions of themselves, regardless of where they live. Find your place at Okta today! https://www.okta.com/company/careers/. Some roles may require travel to one of our office locations for in-person onboarding. Okta is an Equal Opportunity Employer. 
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, marital status, age, physical or mental disability, or status as a protected veteran. We also consider for employment qualified applicants with arrest and conviction records, consistent with applicable laws. If reasonable accommodation is needed to complete any part of the job application, interview process, or onboarding, please use this Form to request an accommodation. Okta is committed to complying with applicable data privacy and security laws and regulations. For more information, please see our Privacy Policy at https://www.okta.com/privacy-policy/.
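The role above centres on shaping warehouse data into Tableau-ready datasets with Python and Pandas. Purely as an illustrative sketch of that kind of data wrangling (the file name, columns, and aggregation grain are assumptions for the example, not details from the posting):

```python
# Minimal sketch (not Okta's actual pipeline): shaping a warehouse extract into a
# Tableau-friendly dataset with Pandas. File and column names are hypothetical.
import pandas as pd

# Load a raw export, e.g. one produced by a warehouse sync
raw = pd.read_csv("finance_transactions.csv", parse_dates=["invoice_date"])

# Basic cleaning: drop exact duplicates, normalise a text field, fill gaps
clean = (
    raw.drop_duplicates()
       .assign(region=lambda df: df["region"].str.strip().str.title())
       .fillna({"amount_usd": 0.0})
)

# Aggregate to the grain a dashboard typically needs: one row per month x region
monthly_kpis = (
    clean.groupby([pd.Grouper(key="invoice_date", freq="MS"), "region"])
         .agg(total_amount=("amount_usd", "sum"), invoice_count=("amount_usd", "size"))
         .reset_index()
)

# Write a flat extract that Tableau can connect to directly
monthly_kpis.to_csv("monthly_finance_kpis.csv", index=False)
```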
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are looking for a passionate and curious Data Engineer (Entry-Level) to join our DataTech team. This is an excellent opportunity for fresh graduates or early-career professionals to build a strong foundation in data engineering by working on real-world data problems, building robust pipelines, and collaborating across teams.
Responsibilities
Assist in developing scalable and optimized data pipelines for extraction, transformation, and loading (ETL).
Write clean and efficient Python scripts and SQL queries for data processing and analysis.
Work on integrating multiple data sources (databases, APIs, flat files, etc.).
Support team members in ensuring data quality, accuracy, and consistency.
Collaborate with analysts, data scientists, and engineers to deliver data solutions.
Contribute to automation initiatives and help build internal data tools.
Participate in code reviews, documentation, and team meetings to learn best practices in software and data engineering.
Assist in building computer vision or other AI models.
Requirements
B.E./B.Tech in Computer Science, Information Technology, or a related field (Tier I college background is a plus).
Strong foundation in: Python (Pandas, NumPy, basic scripting), SQL (writing basic to intermediate queries).
Exposure to: Git and Linux shell scripting, data visualization tools or dashboards (e.g., Tableau, Power BI; optional).
Bonus if you've explored tools like Airflow, BigQuery, AWS S3/Redshift, or Firebase.
Built projects or done internships related to data engineering, analytics, or backend systems.
Strong logical thinking, curiosity to learn, and willingness to dive into data problems.
This job was posted by Payal Verma from Box8.
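The pipeline duties described above boil down to a classic extract-transform-load loop in Python. A minimal sketch of that pattern, assuming hypothetical file, column, and table names (nothing here comes from the posting itself):

```python
# Minimal ETL sketch under assumed inputs: extract a CSV, transform it, load to SQLite.
import sqlite3
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    """Extract: read a raw orders file (hypothetical columns)."""
    return pd.read_csv(path, parse_dates=["order_date"])

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Transform: keep valid rows and derive a revenue column."""
    df = df.dropna(subset=["order_id", "quantity", "unit_price"])
    df["revenue"] = df["quantity"] * df["unit_price"]
    return df

def load(df: pd.DataFrame, db_path: str = "warehouse.db") -> None:
    """Load: append the cleaned rows into a local SQLite table."""
    with sqlite3.connect(db_path) as conn:
        df.to_sql("orders_clean", conn, if_exists="append", index=False)

if __name__ == "__main__":
    load(transform(extract("orders.csv")))
```

The same structure scales up when the source becomes an API or the target becomes a cloud warehouse; only the extract and load functions change.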
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DESCRIPTION
Amazon's Financial Technology team is looking for passionate, results-oriented, inventive software developers who can work on massively scalable and distributed systems. The candidate thrives in a fast-paced environment, understands how to deal with large sets of data and transactions, and will help us deliver on a new generation of software leveraging Amazon Web Services. The candidate is passionate about technology and wants to be involved with real business problems. Our platform serves Amazon's finance, tax and accounting functions across the globe. As a member of this team, your mission will be to design, develop, document and support massively scalable, distributed real-time systems. Using C++, Python, Java, object-oriented design patterns, distributed databases and other innovative storage techniques, you will build and deliver software systems that support complex and rapidly evolving business requirements. We're looking for individuals with a range of experience, from brilliant and motivated new college graduates to technical leaders with scars and battle-tested wisdom. If you can think big and want to join a fast-moving team breaking new ground at Amazon, and you meet the qualifications below, we would like to speak with you!
Key job responsibilities
Design and build solutions that offer an optimised experience for Tax customers.
Play a key role in the definition, vision, design, roadmap and development of a new platform from beginning to end.
Work through all phases of the project lifecycle, including reviewing requirements, designing services that lay the foundation for the new technology platform, building new interfaces that also integrate with existing architectures, developing and testing code, and delivering seamless implementations for Global Tax customers.
Be very hands-on; work with others on the engineering team to manage the day-to-day activities, and participate in designs, design reviews, and code reviews.
Evaluate and utilize AWS technologies where appropriate, such as EC2, RDS/DynamoDB/Redshift, SWF, S3 and QuickSight, to build backend services and customer-facing APIs.
Design and code technical solutions in tools such as Angular JS, React, Node.js, jQuery and/or SQL Server to deliver value to tax customers.
Contribute to a suite of tools that will be hosted on the AWS infrastructure.
Build dashboard and visualization services.
Work with a variety of tools across the spectrum of the lifecycle.
Build iteratively using agile methodologies.
A day in the life
This role works closely with Tax teams to understand their business problems, adjust quickly to changing needs, and align with customer priorities. You should be able to act fast, build quick solutions, create POCs and suggest the right solutions to customers. You should work closely with peer engineers, be involved in designing solutions, perform deep dives, and contribute to quality code reviews. You should also shoulder the responsibility of owning and resolving operational tickets along with your peers.
About The Team
Our charter includes building software solutions for tax compliance and auditing teams with a special focus on Geo-Expansions, which includes in-country and new-country expansions as well as mergers and acquisitions. We work closely with various stakeholders - Tax Compliance and Auditing teams spread across different geographies.
We are currently involved in building a data lineage-based reporting solution that allows tax teams to generate reports based on the linkages of transactional data across various financial systems.
About The Team
Our charter includes building software solutions for tax compliance and auditing teams with a special focus on Mexico Tax Compliance Reporting, which includes in-country and new-country expansions as well as mergers and acquisitions. We work closely with various stakeholders - Tax Compliance and Auditing teams spread across different geographies. We are currently involved in building a self-serve reporting solution that allows tax teams to generate reports based on the linkages of transactional data across various financial systems.
BASIC QUALIFICATIONS
3+ years of non-internship professional software development experience
2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience
3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience
Experience programming with at least one software programming language
PREFERRED QUALIFICATIONS
3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations
Bachelor's degree in computer science or equivalent
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A2977190
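The responsibilities above call out AWS building blocks such as DynamoDB for backend services. Purely as an illustration of that kind of integration, and not Amazon's internal service code, a minimal boto3 sketch might look like the following; the table name, key, and attributes are hypothetical:

```python
# Minimal sketch of writing and reading a record in a hypothetical "TaxDocuments"
# DynamoDB table whose partition key is "document_id".
import boto3

dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("TaxDocuments")

# Write one record
table.put_item(
    Item={
        "document_id": "MX-2024-000123",
        "country": "MX",
        "status": "FILED",
        "amount_cents": 125000,
    }
)

# Read it back by primary key
response = table.get_item(Key={"document_id": "MX-2024-000123"})
print(response.get("Item"))
```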
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You'll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams. This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Responsibilities
Design and optimize complex SQL queries, stored procedures, and indexes.
Perform performance tuning and query plan analysis.
Contribute to schema design and data normalization.
Migrate data from multiple sources to cloud or ODS platforms.
Design schema mapping and implement transformation logic.
Ensure consistency, integrity, and accuracy in migrated data.
Build automation scripts for data ingestion, cleansing, and transformation.
Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs (e.g., Boto3).
Maintain reusable script modules for operational pipelines.
Develop and manage DAGs for batch/stream workflows.
Implement retries, task dependencies, notifications, and failure handling.
Integrate Airflow with cloud services, data lakes, and data warehouses.
Manage data storage (S3, GCS, Blob), compute services, and data pipelines.
Set up permissions, IAM roles, encryption, and logging for security.
Monitor and optimize the cost and performance of cloud-based data operations.
Design and manage data marts using dimensional models.
Build star/snowflake schemas to support BI and self-serve analytics.
Enable incremental load strategies and partitioning.
Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka.
Support modular pipeline design and metadata-driven frameworks.
Ensure high availability and scalability of the stack.
Collaborate with BI teams to design datasets and optimize queries.
Support the development of dashboards and reporting layers.
Manage access, data refreshes, and performance for BI tools.
Requirements
4-6 years of hands-on experience in data engineering roles.
Strong SQL skills in PostgreSQL (tuning, complex joins, procedures).
Advanced Python scripting skills for automation and ETL.
Proven experience with Apache Airflow (custom DAGs, error handling).
Solid understanding of cloud architecture (especially AWS).
Experience with data marts and dimensional data modeling.
Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.).
Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI.
Version control (Git) and CI/CD pipeline knowledge are a plus.
Excellent problem-solving and communication skills.
This job was posted by Suryansh Singh Karchuli from ShepHertz Technologies.
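The Airflow responsibilities listed above (custom DAGs, retries, task dependencies, failure handling) map onto a small amount of code. A minimal sketch under assumed names, where the DAG id, schedule, and task callables are illustrative only:

```python
# Minimal Apache Airflow (2.x-style) sketch showing retries, task dependencies,
# and a daily schedule. All identifiers are hypothetical examples.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull data from source APIs / object storage")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write the result into the warehouse / data mart")

default_args = {
    "retries": 2,                          # retry a failed task twice
    "retry_delay": timedelta(minutes=5),   # wait 5 minutes between attempts
}

with DAG(
    dag_id="daily_orders_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Explicit task dependencies: extract -> transform -> load
    t_extract >> t_transform >> t_load
```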
Posted 2 weeks ago
6.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
As a Data Platform Lead, you will utilize your strong technical background and hands-on development skills to design, develop, and maintain data platforms. Leading a team of skilled data engineers, you will create scalable and robust data solutions that enhance business intelligence and decision-making. You will ensure the reliability, efficiency, and scalability of data systems while mentoring your team to achieve excellence. Collaborating closely with our client's CXO-level stakeholders, you will oversee pre-sales activities, solution architecture, and project execution. Your ability to stay ahead of industry trends and integrate the latest technologies will be crucial in maintaining our competitive edge.
Responsibilities
Client-Centric Approach: Understand client requirements deeply and translate them into robust technical specifications, ensuring solutions meet their business needs.
Architect for Success: Design scalable, reliable, and high-performance systems that exceed client expectations and drive business success.
Lead with Innovation: Provide technical guidance, support, and mentorship to the development team, driving the adoption of cutting-edge technologies and best practices.
Champion Best Practices: Ensure excellence in software development and IT service delivery, constantly assessing and evaluating new technologies, tools, and platforms for project suitability.
Be the Go-To Expert: Serve as the primary point of contact for clients throughout the project lifecycle, ensuring clear communication and high levels of satisfaction.
Build Strong Relationships: Cultivate and manage relationships with CxO/VP-level stakeholders, positioning yourself as a trusted advisor.
Deliver Excellence: Manage end-to-end delivery of multiple projects, ensuring timely and high-quality outcomes that align with business goals.
Report with Clarity: Prepare and present regular project status reports to stakeholders, ensuring transparency and alignment.
Collaborate Seamlessly: Coordinate with cross-functional teams to ensure smooth and efficient project execution, breaking down silos and fostering collaboration.
Grow the Team: Provide timely and constructive feedback to support the professional growth of team members, creating a high-performance culture.
Requirements
Master's (M.Tech., M.S.) in Computer Science or equivalent from reputed institutes like IIT or NIT preferred.
Overall 6-8 years of experience with a minimum of 2 years of relevant experience and a strong technical background.
Experience working in a mid-size IT services company is preferred.
Preferred Certifications (one or more)
AWS Certified Data Analytics Specialty.
AWS Solutions Architect Professional.
Azure Data Engineer + Solution Architect.
Databricks Certified Data Engineer / ML Professional.
Technical Expertise
Advanced knowledge of distributed architectures and data modeling practices.
Extensive experience with data lakehouse systems like Databricks and data warehousing solutions such as Redshift and Snowflake.
Hands-on experience with data technologies such as Apache Spark, SQL, Airflow, Kafka, Jenkins, Hadoop, Flink, Hive, Pig, HBase, Presto, and Cassandra.
Knowledge of BI tools, including Power BI, Tableau, QuickSight, and open-source equivalents like Superset and Metabase, is good to have.
Strong knowledge of data storage formats, including Iceberg, Hudi, and Delta.
Proficient programming skills in Python, Scala, Go, or Java.
Ability to architect end-to-end solutions from data ingestion to insights, including designing data integrations using ETL and other data integration patterns.
Experience working with multi-cloud environments, particularly AWS and Azure.
Excellent teamwork and communication skills, with the ability to thrive in a fast-paced, agile environment.
This job was posted by Mrinalini Singh from Oneture Technologies.
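As a small illustration of the Spark and lakehouse stack this role references, the sketch below shows a Spark job that lands raw files as partitioned Parquet in object storage; the bucket paths and columns are invented for the example and are not tied to any client or platform:

```python
# Minimal PySpark sketch of a lakehouse-style ingestion step: read raw CSV files,
# standardise a few columns, and write partitioned Parquet to object storage.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw_to_lake").getOrCreate()

# Read the raw zone (hypothetical path and schema)
raw = (
    spark.read.option("header", True)
         .csv("s3a://example-raw-zone/orders/2024/")
)

# Standardise types so downstream consumers get consistent columns
curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Land the curated zone partitioned by date for efficient pruning
(
    curated.write.mode("append")
           .partitionBy("order_date")
           .parquet("s3a://example-curated-zone/orders/")
)
```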
Posted 2 weeks ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Analytics - Risk Product
About Us: Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.
About the Role: We seek an experienced Assistant General Manager - Analytics for data analysis and reporting across our lending verticals. The ideal candidate will use SQL and dashboarding tools to deliver actionable insights and manage data needs for multiple lending verticals. A drive to implement AI to automate repetitive workflows is essential.
Key Responsibilities:
Develop, maintain, and automate reporting and dashboards for lending vertical KPIs.
Manage data and analytics requirements for multiple lending verticals.
Collaborate with stakeholders to understand data needs and provide support.
Analyze data trends to provide insights and recommendations.
Design and implement data methodologies to improve data quality.
Ensure data accuracy and integrity.
Communicate findings to technical and non-technical audiences.
Stay updated on data analytics trends and identify opportunities for AI implementation.
Drive the use of AI to automate repetitive data workflows.
Qualifications
Bachelor's degree in a quantitative field.
5-7 years of data analytics experience.
Strong SQL and PySpark proficiency.
Experience with data visualization tools (e.g., Tableau, Power BI, Looker).
Lending/financial services experience is a plus.
Excellent analytical and problem-solving skills.
Strong communication and presentation skills.
Ability to manage multiple projects.
Ability to work independently and in a team.
Demonstrated drive to use and implement AI for automation.
Preferred Qualifications
Experience with statistical modeling and data mining.
Familiarity with cloud data warehousing (e.g., Snowflake, BigQuery, Redshift).
Experience with Python or R.
Experience implementing AI solutions in a business setting.
Why Join Us?
Bragging rights to be behind the largest fintech lending play in India.
A fun, energetic, once-in-a-lifetime environment that enables you to achieve your best possible outcome in your career.
With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers & merchants – and we are committed to it. India’s largest digital lending story is brewing here. It’s your opportunity to be a part of the story!
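Since the role pairs SQL with PySpark to keep lending KPI dashboards fed, here is a minimal, hypothetical sketch of that kind of aggregation job; the table and column names are assumptions for illustration, not Paytm data:

```python
# Minimal PySpark sketch of a monthly KPI aggregation that a lending dashboard
# might sit on. All identifiers below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lending_kpis").getOrCreate()

# Assumed warehouse table of loan applications
loans = spark.table("lending.loan_applications")

kpis = (
    loans.groupBy("vertical", "application_month")
         .agg(
             F.count("*").alias("applications"),
             F.avg("approved_flag").alias("approval_rate"),   # approved_flag assumed 0/1
             F.sum("disbursed_amount").alias("disbursed_amount"),
         )
)

# Persist the aggregate so BI tools can read it without rescanning raw data
kpis.write.mode("overwrite").saveAsTable("lending.monthly_kpis")
```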
Posted 2 weeks ago