5.0 - 10.0 years
0 Lacs
karnataka
On-site
Looking for a DBT Developer with 5 to 10 years of experience. We invite applications for the role of Lead Consultant, DBT Data Engineer!

As a DBT Data Engineer, you will provide technical direction and lead a group of one or more developers toward a common goal. Your responsibilities include designing, developing, and automating ETL processes using DBT and AWS, and building robust data pipelines that move data from various sources into data warehouses or data lakes. Collaborating with cross-functional teams is crucial to ensure data accuracy, completeness, and consistency, and data cleansing, validation, and transformation are essential to maintain data quality and integrity. You will optimize database and query performance for efficient data processing, and work closely with data analysts and data scientists to provide clean, reliable data for analysis and modeling.

The role involves writing SQL queries against Snowflake and developing scripts for Extract, Load, and Transform (ELT) operations. Hands-on experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, cloning, the query optimizer, metadata management, data sharing, stored procedures, and UDFs is required. Proficiency with the Snowflake cloud data warehouse and with AWS S3 buckets or Azure Blob Storage containers for data integration is necessary. Additionally, you should have solid experience integrating Python/PySpark with Snowflake and with cloud services like AWS/Azure, along with a sound understanding of ETL tools and data integration techniques.

You will collaborate with business stakeholders to understand data requirements and develop ETL solutions accordingly. Strong programming skills in languages such as Python, Java, and/or Scala are expected. Experience with big data technologies such as Kafka and cloud computing platforms like AWS is advantageous, as is familiarity with SQL, NoSQL, and/or graph databases. Your experience in requirement gathering, analysis, design, development, and deployment will be valuable. Building data ingestion pipelines, deploying with CI/CD tools such as Azure Boards and GitHub, and writing automated test cases are desirable skills, as are client-facing project experience and knowledge of Snowflake best practices.

If you are a skilled DBT Data Engineer with a passion for data management and analytics, we encourage you to apply for this exciting opportunity!
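As a rough illustration of the Python-to-Snowflake ELT scripting this role describes, here is a minimal sketch assuming the snowflake-connector-python package; the account, stage, and table names are placeholders, not details from the posting:

```python
# Hedged sketch: automating an ELT step against Snowflake from Python.
# All connection parameters and object names are illustrative assumptions.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",                       # hypothetical account identifier
    user="etl_user",
    password=os.environ["SNOWFLAKE_PASSWORD"],  # keep secrets out of code
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load: ingest staged files from an external stage into a raw table.
    cur.execute("COPY INTO raw_orders FROM @s3_orders_stage FILE_FORMAT = (TYPE = 'CSV')")
    # Transform: cleanse and upsert into the curated layer.
    cur.execute("""
        MERGE INTO curated.orders AS t
        USING (SELECT order_id, TRIM(customer_id) AS customer_id, amount
               FROM raw_orders WHERE amount IS NOT NULL) AS s
        ON t.order_id = s.order_id
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount
        WHEN NOT MATCHED THEN INSERT (order_id, customer_id, amount)
             VALUES (s.order_id, s.customer_id, s.amount)
    """)
finally:
    conn.close()
```

In practice a DBT project would express the MERGE as a model and handle orchestration, but the connector pattern above is a common glue layer for Python automation around it.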
Posted 2 days ago
2.0 years
0 Lacs
India
On-site
At H1, we believe access to the best healthcare information is a basic human right. Our mission is to provide a platform that can optimally inform every doctor interaction globally. This promotes health equity and builds needed trust in healthcare systems. To accomplish this, our teams harness the power of data and AI technology to unlock groundbreaking medical insights and convert those insights into action that results in optimal patient outcomes and accelerates an equitable and inclusive drug development lifecycle. Visit h1.co to learn more about us.

As a Software Engineer on the Search Engineering team, you will support and develop the company's search infrastructure. This involves working with terabytes of data and handling the indexing, ranking, and retrieval of medical data that powers search in the backend infrastructure.

What You'll Do At H1
The Search Engineering team is responsible for developing and maintaining the company's core search infrastructure. Our objective is to enable fast, accurate, and scalable search across terabytes of medical data. This involves building systems for efficient data ingestion, indexing, ranking, and retrieval that power key product features and user experiences. As a Software Engineer on the Search Engineering team, your day typically includes:
- Working with our search infrastructure: writing and maintaining code that ingests large-scale data into Elasticsearch.
- Designing and implementing high-performance APIs that serve search use cases with low latency.
- Building and maintaining end-to-end features using Node.js and GraphQL, ensuring scalability and maintainability.
- Collaborating with cross-functional teams, including product managers and data engineers, to align on technical direction and deliver impactful features to our users.
- Taking ownership of the search codebase: proactively debugging, troubleshooting, and resolving issues quickly to ensure stability and performance.
- Consistently producing simple, elegant designs and writing high-quality, maintainable code that can be easily understood and reused by teammates.
- Demonstrating a strong focus on performance optimization, ensuring systems are fast, efficient, and scalable.
- Communicating effectively and collaborating across teams in a fast-paced, dynamic environment.
- Staying up to date with the latest advancements in AI and search technologies, identifying opportunities to integrate cutting-edge capabilities into our platforms.

About You
You bring strong hands-on technical skills and experience in building robust backend APIs. You thrive on solving complex challenges with innovative, scalable solutions and take pride in maintaining high code quality through thorough testing. You are able to align your work with broader organizational goals and actively contribute to strategic initiatives. You proactively identify risks and propose solutions early in the project lifecycle to avoid downstream issues. You are curious, eager to learn, and excited to grow in a collaborative, high-performing engineering team environment.

Requirements
- 1-2 years of professional experience.
- Strong programming skills in TypeScript, Node.js, and Python (mandatory).
- Practical experience with Docker and Kubernetes.
- Good to have: big data technologies (e.g., Scala, Hadoop, PySpark), Golang, GraphQL, Elasticsearch, and LLMs.

Not meeting all the requirements but still feel like you'd be a great fit? Tell us how you can contribute to our team in a cover letter!
H1 OFFERS
- Full suite of health insurance options, in addition to generous paid time off
- Pre-planned company-wide wellness holidays
- Retirement options
- Health & charitable donation stipends
- Impactful Business Resource Groups
- Flexible work hours & the opportunity to work from anywhere
- The opportunity to work with leading biotech and life sciences companies in an innovative industry with a mission to improve healthcare around the globe
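For context on the Elasticsearch ingestion work described above, here is a hedged sketch of bulk-indexing documents with the official Python client; the endpoint, index name, and record shape are assumptions for illustration:

```python
# Hedged sketch: bulk-indexing records into Elasticsearch from Python.
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

def doc_stream(records):
    """Yield one bulk action per source record."""
    for r in records:
        yield {
            "_index": "medical-profiles",    # hypothetical index
            "_id": r["id"],
            "_source": {"name": r["name"], "specialty": r["specialty"]},
        }

records = [{"id": "1", "name": "Dr. A", "specialty": "cardiology"}]
ok, errors = helpers.bulk(es, doc_stream(records), raise_on_error=False)
print(f"indexed={ok}, errors={len(errors)}")
```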
Posted 2 days ago
2.0 - 3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Data Science (GenAI & Prompt Engineering) – Bangalore
Business Analytics Analyst 2

About Citi
Citi's mission is to serve as a trusted partner to our clients by responsibly providing financial services that enable growth and economic progress. We have 200 years of experience helping our clients meet the world's toughest challenges and embrace its greatest opportunities.

Analytics and Information Management (AIM)
Citi AIM was established in 2003 and is located across multiple cities in India – Bengaluru, Chennai, Pune, and Mumbai. It is a global community that objectively connects and analyzes information to create actionable intelligence for our business leaders. It identifies fact-based opportunities for revenue growth in partnership with the businesses. The function balances customer needs, business strategy, and profit objectives using best-in-class and relevant analytic methodologies.

What do we do?
The North America Consumer Bank – Data Science and Modeling team analyzes millions of prospects and billions of customer-level transactions using big data tools and machine learning and AI techniques to unlock opportunities for our clients in meeting their financial needs and to create economic value for the bank. The team extracts relevant insights, identifies business opportunities, converts business problems into modeling frameworks, uses big data tools and the latest deep learning and machine learning algorithms to build predictive models, implements solutions, and designs go-to-market strategies for a huge variety of business problems.

Role Description
The role is Business Analytics Analyst 2 in the Data Science and Modeling team of the North America Consumer Bank, reporting to the AVP/VP leading the team. The Next Gen Analytics (NGA) modeling team is part of the Analytics & Information Management (AIM) unit.

Role Expectations:
- Client obsession – Create client-centric analytic solutions to business problems. Individuals should be able to take a holistic view of multiple businesses and develop analytic solutions accordingly.
- Analytic project execution – Own and deliver multiple complex analytic projects. This requires understanding the business context, converting business problems into modeling problems, and implementing solutions that create economic value.
- Domain expertise – Individuals are expected to be domain experts in their subfield, as well as have a holistic view of other business lines to create better solutions. Key fields of focus are new customer acquisition, existing customer management, customer retention, product development, pricing and payment optimization, and digital journey.
- Modeling and tech savvy – Stay up to date with the latest use cases in the modeling community and with machine learning and deep learning algorithms, and share knowledge within the team.
- Statistical mindset – Proficiency in basic statistics, hypothesis testing, segmentation, and predictive modeling.
- Communication skills – Ability to translate and articulate technical thoughts and ideas to a larger audience, including influencing skills with peers and senior management.
- Strong project management skills; ability to coach and mentor juniors; contribution to organizational initiatives in wide-ranging areas including competency development, training, and organization-building activities.

Role Responsibilities:
- Work with large and complex datasets using a variety of tools (Python, PySpark, SQL, Hive, etc.) and frameworks to build deep learning/generative AI solutions for various business requirements. Primary focus areas include model training/fine-tuning, model validation, model deployment, and model governance across multiple portfolios.
- Design, fine-tune, and implement LLM/GenAI applications using techniques like prompt engineering, Retrieval-Augmented Generation (RAG), and model fine-tuning.
- Document data requirements, data collection/processing/cleaning, and exploratory data analysis, utilizing deep learning/generative AI algorithms and data visualization techniques. Incumbents in this role may often be referred to as Data Scientists.
- Specialization in marketing, risk, digital, and AML fields is possible, applying deep learning and generative AI models to innovate in these domains.
- Collaborate with team members and business partners to build model-driven solutions using cutting-edge generative AI models (e.g., large language models) and, at times, ML/traditional methods (XGBoost, linear and logistic regression, segmentation, etc.).
- Work with model governance and fair lending teams to ensure compliance of models in accordance with Citi standards.
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

What do we look for?
If you are a bright and talented individual looking for a career in AI and machine learning with a focus on generative AI, Citi has amazing opportunities for you.
- Bachelor's degree with at least 3 years of experience in data analytics, or Master's degree with 2 years of experience in data analytics, or PhD.

Technical Skills
- Hands-on experience in PySpark/Python/R programming along with strong experience in SQL.
- 2-4 years of experience working on deep learning and generative AI applications.
- Experience working with Transformers/LLMs (OpenAI, Claude, Gemini, etc.), prompt engineering, RAG-based architectures, and relevant tools/frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, LangChain, and LlamaIndex.
- Solid understanding of deep learning and transformer/language models; familiarity with vector databases and fine-tuning techniques.
- Experience working with large and multiple datasets and data warehouses, and the ability to pull data using relevant programs and coding.
- Strong background in statistical analysis.
- Capability to validate and maintain deployed models in production.
- Self-motivated and able to implement innovative solutions at a fast pace.
- Experience in credit cards and retail banking is preferred.

Competencies
- Strong communication skills; excellent written and oral communication
- Management of multiple stakeholders
- Strong analytical and problem-solving skills
- Strong team player
- Control-oriented and risk-aware
- Working experience in a quantitative field
- Willingness to learn and a can-do attitude
- Ability to build partnerships with cross-functional leaders

Education:
Bachelor's/Master's degree in Economics, Statistics, Mathematics, Information Technology, Computer Applications, Engineering, etc. from a premier institute.

Other Details
Employment: Full Time
Industry: Credit Cards, Retail Banking, Financial Services, Banking
Job Family Group: Decision Management
Job Family: Specialized Analytics (Data Science/Computational Statistics)
Time Type:
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
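To make the Retrieval-Augmented Generation (RAG) technique named above concrete, here is a minimal sketch of the pattern: retrieve the most relevant passages for a query, then prompt an LLM with them. The embed() and call_llm() functions are stubs standing in for a real embedding model and a hosted LLM API, and the corpus is toy data:

```python
# Hedged sketch of the RAG pattern; not any specific team's implementation.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stub: replace with a real sentence-embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    q = embed(query)
    scores = [float(q @ embed(d)) for d in corpus]   # cosine similarity (unit vectors)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def call_llm(prompt: str) -> str:
    """Stub: replace with an actual model call (OpenAI, Claude, Gemini, ...)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"

corpus = ["Card late fees are $25.", "Rewards post within 2 cycles.", "APR is variable."]
context = "\n".join(retrieve("When do rewards appear?", corpus))
answer = call_llm(f"Answer using only this context:\n{context}\n\nQ: When do rewards appear?")
print(answer)
```

A production system would swap the stubs for a vector database and a governed model endpoint; the retrieve-then-prompt flow stays the same.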
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Roles and responsibilities:
- Design and implement data pipelines for supply chain data (e.g., inventory, shipping, procurement).
- Develop and maintain data warehouses and data lakes.
- Ensure data quality, integrity, and security.
- Collaborate with supply chain stakeholders to identify analytics requirements.
- Develop data models and algorithms for predictive analytics (e.g., demand forecasting, supply chain optimization).
- Implement data visualization tools (e.g., Tableau, Power BI).
- Integrate data from various sources (e.g., ERP, PLM, and other systems).
- Develop APIs for data exchange.
- Work with cross-functional teams (e.g., supply chain, logistics, IT).
- Communicate technical concepts to non-technical stakeholders.
- Experience with machine learning algorithms and concepts.
- Knowledge of data governance and compliance.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced environment.

Technical Skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 6-8 years of experience in data engineering.
- Proficiency in:
  - Programming languages: Python, Java, SQL, Spark SQL.
  - Data technologies: Hadoop, PySpark, NoSQL databases.
  - Data visualization tools: Qlik Sense, Tableau, Power BI.
  - Cloud platforms: Azure Data Factory, Azure Databricks, AWS.
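A brief sketch of the kind of PySpark pipeline step such a role involves, joining shipping and inventory feeds into a daily demand table; the paths and column names are illustrative assumptions:

```python
# Hedged sketch: a supply-chain transform in PySpark. Names are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("supply-chain-etl").getOrCreate()

shipments = spark.read.parquet("s3://lake/raw/shipments/")   # hypothetical path
inventory = spark.read.parquet("s3://lake/raw/inventory/")

daily_demand = (
    shipments
    .withColumn("ship_date", F.to_date("shipped_at"))
    .groupBy("sku", "ship_date")
    .agg(F.sum("quantity").alias("units_shipped"))
    .join(inventory.select("sku", "on_hand"), on="sku", how="left")
    # Rough days-of-cover: stock on hand relative to a day's shipments.
    .withColumn("days_of_cover", F.col("on_hand") / F.col("units_shipped"))
)

daily_demand.write.mode("overwrite").partitionBy("ship_date").parquet(
    "s3://lake/curated/daily_demand/"
)
```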
Posted 2 days ago
2.0 - 10.0 years
0 Lacs
India
Remote
Pay Range: ₹400-500/hour
Location: Remote (India)
Mode: One-to-One Sessions Only (No batch teaching)

We are hiring a part-time PySpark/Databricks tutor who can deliver personalized, one-on-one online sessions to college and university-level students. The ideal candidate should have hands-on experience in big data technologies, particularly PySpark and Databricks, and should be comfortable teaching tools and techniques commonly used in computer science and data engineering.

Key Responsibilities:
- Deliver engaging one-to-one remote tutoring sessions focused on PySpark, Apache Spark, Databricks, and related tools.
- Teach practical use cases, project implementation techniques, and hands-on coding for real-world applications.
- Adapt teaching style to individual student levels, from beginner to advanced.
- Provide support with assignments, project work, and interview preparation.
- Ensure clarity in communication and foster an interactive learning environment.

Required Skills & Qualifications:
- Experience: 2 to 10 years in big data, data engineering, or related roles using PySpark and Databricks.
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a relevant field.
- Strong English communication skills, both verbal and written.
- Familiarity with Spark SQL, Delta Lake, notebooks, and data pipelines.
- Ability to teach technical concepts with simplicity and clarity.

Job Requirements:
- Freshers with strong knowledge and teaching ability may also apply.
- Must have a personal laptop and a stable Wi-Fi connection.
- Must be serious and committed to long-term part-time work.
- Candidates who have applied before should not reapply.

💡 Note: This is a remote, part-time opportunity, and sessions will be conducted one-to-one, not in batch format. This role is ideal for professionals, freelancers, or educators passionate about sharing knowledge.

📩 Apply now only if you agree with the pay rate (₹400-500/hr) and meet the listed criteria. Let's inspire the next generation of data engineers!
Posted 2 days ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Data Engineer, Chennai

We're seeking a highly motivated Data Engineer to join our agile, cross-functional team and drive end-to-end data pipeline development in a cloud-native, big data ecosystem. You'll leverage ETL/ELT best practices and data lakehouse paradigms to deliver scalable solutions. Proficiency in SQL, Python, Spark, and modern data orchestration tools (e.g., Airflow) is essential, along with experience in CI/CD, DevOps, and containerized environments like Docker and Kubernetes. This is your opportunity to make an impact in a fast-paced, data-driven culture.

Responsibilities
- Responsible for data pipeline development and maintenance
- Contribute to development, maintenance, testing strategy, design discussions, and operations of the team
- Participate in all aspects of agile software development, including design, implementation, and deployment
- Responsible for the end-to-end lifecycle of new product features/components
- Ensure application performance, uptime, and scale, maintaining high standards of code quality and thoughtful application design
- Work with a small, cross-functional team on products and features to drive growth
- Learn new tools, languages, workflows, and philosophies to grow
- Research and suggest new technologies for boosting the product
- Have an impact on product development by making important technical decisions, influencing the system architecture, development practices, and more

Qualifications
- Excellent team player with strong communication skills
- B.Sc. in Computer Science or similar
- 3-5 years of experience in data pipeline development
- 3-5 years of experience in PySpark/Databricks
- 3-5 years of experience in Python/Airflow
- Knowledge of OOP and design patterns
- Knowledge of server-side technologies such as Java and Spring
- Experience with Docker containers, Kubernetes, and cloud environments
- Expertise in testing methodologies (unit testing, TDD, mocking)
- Fluency with large-scale SQL databases
- Good problem-solving and analysis abilities

Advantageous
- Experience with Azure cloud services
- Experience with Agile development methodologies
- Experience with Git

Additional Information
Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us.
We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status, or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
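As an illustration of the orchestration stack mentioned in the posting, here is a minimal Airflow 2.x DAG wiring an extract-transform-load sequence; the DAG name and task bodies are placeholder stubs:

```python
# Hedged sketch: a minimal Airflow DAG (Airflow 2.4+ "schedule" parameter).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(): ...    # stub: pull data from a source system
def transform(): ...  # stub: apply ELT logic
def load(): ...       # stub: publish to the warehouse/lakehouse

with DAG(
    dag_id="daily_sales_pipeline",    # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3    # linear dependency: extract, then transform, then load
```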
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen's Mission of Serving Patients

At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What You Will Do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience:
- Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience

Functional Skills:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Excellent problem-solving skills and the ability to work with large, complex datasets
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
- Strong understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting
- Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
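A hedged sketch of an ETL step with a simple data-quality gate, reflecting the pipeline and data-quality responsibilities above; the bucket paths, columns, and 1% threshold are illustrative assumptions:

```python
# Hedged sketch: a PySpark load that refuses to publish low-quality data.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-gated-load").getOrCreate()
df = spark.read.parquet("s3://bucket/raw/events/")  # hypothetical source

cleaned = df.dropDuplicates(["event_id"]).filter(F.col("event_ts").isNotNull())

# Quality gate: fail the job rather than publish incomplete data.
null_ratio = cleaned.filter(F.col("subject_id").isNull()).count() / max(cleaned.count(), 1)
if null_ratio > 0.01:
    raise ValueError(f"subject_id null ratio {null_ratio:.2%} exceeds 1% threshold")

cleaned.write.mode("append").parquet("s3://bucket/curated/events/")
```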
Posted 2 days ago
2.0 - 4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Amgen
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what's known today.

About The Role
Role Description:
We are looking for an Associate Data Engineer with deep expertise in writing data pipelines to build scalable, high-performance data solutions. The ideal candidate will be responsible for developing, optimizing, and maintaining complex data pipelines, integration frameworks, and metadata-driven architectures that enable seamless access and analytics. This role calls for a deep understanding of big data processing, distributed computing, data modeling, and governance frameworks to support self-service analytics, AI-driven insights, and enterprise-wide data management.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing
- Be a key team member that assists in design and development of the data pipeline
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
- Implement data security and privacy measures to protect sensitive data
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
- Collaborate and communicate effectively with product teams
- Identify and resolve complex data-related challenges
- Adhere to best practices for coding, testing, and designing reusable code/components
- Explore new tools and technologies that will help to improve ETL platform performance
- Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect From You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 2 to 4 years of Computer Science, IT, or related field experience, OR
- Diploma and 4 to 7 years of Computer Science, IT, or related field experience

Preferred Qualifications:
Functional Skills:
Must-Have Skills:
- Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), AWS, Redshift, Snowflake, workflow orchestration, and performance tuning on big data processing
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
- Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development

Good-to-Have Skills:
- Experience with data modeling and performance tuning on relational and graph databases (e.g., MarkLogic, AllegroGraph, Stardog, RDF triplestores)
- Understanding of data modeling, data warehousing, and data integration concepts
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
- Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing

Professional Certifications:
- AWS Certified Data Engineer preferred
- Databricks certification preferred

Soft Skills:
- Excellent critical-thinking and problem-solving skills
- Strong communication and collaboration skills
- Demonstrated awareness of how to function in a team setting

As an Associate Data Engineer at Amgen, you will be involved in the development and maintenance of data infrastructure and solutions. You will collaborate with a team of data engineers to design and implement data pipelines, perform data analysis, and ensure data quality. Your strong technical skills, problem-solving abilities, and attention to detail will contribute to the effective management and utilization of data for insights and decision-making.
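For illustration, a minimal SparkSQL transform of the sort the must-have skills describe; the paths, table, and column names are placeholders:

```python
# Hedged sketch: an extract-transform step expressed in SparkSQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sparksql-transform").getOrCreate()

spark.read.parquet("s3://lake/raw/orders/").createOrReplaceTempView("raw_orders")

monthly = spark.sql("""
    SELECT customer_id,
           date_trunc('month', order_ts) AS month,
           SUM(amount)                   AS total_amount,
           COUNT(*)                      AS order_count
    FROM raw_orders
    WHERE status = 'COMPLETE'
    GROUP BY customer_id, date_trunc('month', order_ts)
""")
monthly.write.mode("overwrite").parquet("s3://lake/curated/monthly_orders/")
```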
Posted 2 days ago
15.0 years
0 Lacs
India
On-site
Job Summary
As part of the data leadership team, the Capability Lead – Databricks will be responsible for building, scaling, and delivering Databricks-based data and AI capabilities across the organization. This leadership role involves technical vision, solution architecture, team building, partnership development, and delivery excellence using the Databricks Unified Analytics Platform across industries. The individual will collaborate with clients, alliance partners (Databricks, Azure, AWS), internal stakeholders, and sales teams to drive adoption of lakehouse architectures, data engineering best practices, and AI/ML modernization.

Areas of Responsibility
1. Offering and Capability Development:
- Develop and enhance Databricks-based data platform offerings and accelerators
- Define best practices, architectural standards, and reusable frameworks for Databricks
- Collaborate with alliance teams to strengthen the partnership with Databricks

2. Technical Leadership:
- Provide architectural guidance for Databricks solution design and implementation
- Lead solutioning efforts for proposals, RFIs, and RFPs involving Databricks
- Conduct technical reviews and ensure adherence to design standards
- Act as a technical escalation point for complex project challenges

3. Delivery Oversight:
- Support delivery teams with technical expertise across Databricks projects
- Drive quality assurance, performance optimization, and project risk mitigation
- Review project artifacts and ensure alignment with Databricks best practices
- Foster a culture of continuous improvement and delivery excellence

4. Talent Development:
- Build and grow a high-performing Databricks capability team
- Define skill development pathways and certification goals for team members
- Mentor architects, developers, and consultants on Databricks technologies
- Drive community-of-practice initiatives to share knowledge and innovations

5. Business Development Support:
- Engage with sales and pre-sales teams to position Databricks capabilities
- Contribute to account growth by identifying new Databricks opportunities
- Participate in client presentations, workshops, and technical discussions

6. Thought Leadership and Innovation:
- Build thought leadership through whitepapers, blogs, and webinars
- Stay updated on Databricks product enhancements and industry trends

This role is highly collaborative and will work closely with cross-functional teams to fulfill the above responsibilities.

Job Requirements:
- 12-15 years of experience in data engineering, analytics, and AI/ML
- 3-5 years of strong hands-on experience with Databricks (on Azure, AWS, or GCP)
- Expertise in Spark (PySpark/Scala), Delta Lake, Unity Catalog, MLflow, and Databricks notebooks
- Experience designing and implementing lakehouse architectures at scale
- Familiarity with data governance, security, and compliance frameworks (GDPR, HIPAA, etc.)
- Experience with real-time and batch data pipelines (Structured Streaming, Auto Loader, Kafka, etc.)
- Strong understanding of MLOps and the AI/ML lifecycle
- Databricks certifications (e.g., Databricks Certified Data Engineer Professional, ML Engineer Associate) preferred
- Experience with hyperscaler ecosystems (Azure Data Lake, AWS S3, GCP GCS, ADF, Glue, etc.)
- Experience managing large, distributed teams and working with CXO-level stakeholders
- Strong problem-solving, analytical, and decision-making skills
- Excellent verbal, written, and client-facing communication
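One of the ingestion patterns named in the requirements, Auto Loader, looks roughly like this on Databricks, where a `spark` session is predefined; the cloudFiles format is Databricks-specific, and the paths and table names here are illustrative:

```python
# Hedged sketch: Databricks Auto Loader ingesting files incrementally into a
# Delta table. Runs on a Databricks cluster where `spark` already exists.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/lake/_schemas/orders")
    .load("/mnt/lake/landing/orders/")          # hypothetical landing zone
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders")
    .trigger(availableNow=True)                 # process the backlog, then stop
    .toTable("bronze.orders")                   # Delta table in the metastore
)
```

The checkpoint and schema locations give the stream exactly-once, restartable semantics, which is what makes this pattern suitable for the production pipelines the role describes.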
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Data Scientist - Deputy Manager

Your role:
- Implement solutions to problems using data analysis, data mining, optimization tools, machine learning techniques, and statistics
- Build data-science and technology-based algorithmic solutions to address business needs
- Design large-scale models using regression, the linear models family, and time-series models
- Drive the collection of new data and the refinement of existing data sources
- Analyze and interpret the results of analytics experiments
- Apply a global approach to analytical solutions, both within a business area and across the enterprise
- Use data for exploratory, descriptive, inferential, prescriptive, and advanced analytics
- Share dashboards, reports, and analytical insights from data
- Experience with visualization on large datasets is preferred and an added advantage

Technical knowledge and skills required:
- Experience solving analytical problems using quantitative approaches
- Passion for empirical research and for answering hard questions with data
- Ability to manipulate and analyze complex, high-volume, high-dimensionality data from varying sources
- Ability to apply a flexible analytic approach that allows for results at varying levels of precision
- Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
- Expert knowledge of an analysis tool such as PySpark or Python
- Experience working with large data sets; experience with distributed computing tools (MapReduce, Hadoop, Hive, etc.) a plus
- Familiarity with relational databases and SQL

You're the right fit if:
- You have 5-8 years of experience with an engineering or equivalent background
- You have experience solving analytical problems using quantitative approaches and can manipulate and analyze complex, high-volume, high-dimensionality data from varying sources
- You can apply a flexible analytic approach that allows for results at varying levels of precision, and communicate complex quantitative analysis in a clear, precise, and actionable manner
- You have expert knowledge of an analysis tool such as R or Python, experience working with large data sets (distributed computing tools such as MapReduce, Hadoop, and Hive a plus), and familiarity with relational databases and SQL

How We Work Together
We believe that we are better together than apart. For our office-based teams, this means working in-person at least 3 days per week. Onsite roles require full-time presence in the company's facilities. Field roles are most effectively done outside of the company's main facilities, generally at the customers' or suppliers' locations.

About Philips
We are a health technology company. We built our entire company around the belief that every human matters, and we won't stop until everybody everywhere has access to the quality healthcare that we all deserve. Do the work of your life to help the lives of others. Learn more about our business. Discover our rich and exciting history. Learn more about our purpose.

If you're interested in this role and have many, but not all, of the experiences needed, we encourage you to apply. You may still be the right candidate for this or other opportunities at Philips. Learn more about our culture of impact with care here.
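A toy sketch of the regression/time-series modeling the role describes, fitting a linear model on lag features with scikit-learn; the series is synthetic and the 12-point holdout is an arbitrary choice:

```python
# Hedged sketch: autoregressive-style forecasting with a plain linear model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=120)) + 50           # synthetic series

# Lag features: predict y[t] from y[t-1] and y[t-2].
X = np.column_stack([y[1:-1], y[:-2]])
target = y[2:]

model = LinearRegression().fit(X[:-12], target[:-12])   # hold out the last 12 points
preds = model.predict(X[-12:])
mae = np.mean(np.abs(preds - target[-12:]))
print(f"holdout MAE: {mae:.2f}")
```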
Posted 3 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary
- 8-10 years of experience as an Azure Data Engineer with expertise in Databricks and Azure Data Factory
- Programming expertise in SQL, Spark, and Python is mandatory
- 2+ years of experience with medical claims in healthcare and/or managed care is required
- Expertise in developing ETL/ELT pipelines for BI/data visualization
- Familiarity with normalized, dimensional, star-schema, and snowflake schematic models is mandatory
- Prior experience using version control to manage code changes
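A minimal sketch of the kind of dimensional (star-schema) load such a pipeline performs on medical claims; the table names, columns, and paths are hypothetical, and in practice this would run in Azure Databricks on data landed by Azure Data Factory:

```python
# Hedged sketch: loading a star-schema fact table from raw claims.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-star-load").getOrCreate()

claims = spark.read.parquet("/mnt/raw/claims/")     # hypothetical landing zone
dim_member = spark.table("dw.dim_member")           # conformed dimension

fact_claims = (
    claims
    .filter(F.col("claim_status") == "ADJUDICATED")
    .join(dim_member, on="member_id", how="inner")  # resolve the surrogate key
    .select(
        "member_sk",
        F.to_date("service_date").alias("service_date"),
        "procedure_code",
        F.col("paid_amount").cast("decimal(12,2)").alias("paid_amount"),
    )
)
fact_claims.write.mode("append").saveAsTable("dw.fact_claims")
```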
Posted 3 days ago
3.0 - 4.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
- Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
- 3-4 years of hands-on experience in data engineering, with a strong focus on AWS cloud services.
- Proficiency in Python for data manipulation, scripting, and automation.
- Strong command of SQL for data querying, transformation, and database management.
- Demonstrable experience with AWS data services, including:
  - Amazon S3: data lake storage and management.
  - AWS Glue: ETL service for data preparation.
  - Amazon Redshift: cloud data warehousing.
  - AWS Lambda: serverless computing for data processing.
  - Amazon EMR: managed Hadoop framework for big data processing (Spark/PySpark experience highly preferred).
  - Amazon Kinesis (or Kafka): real-time data streaming.
- Strong analytical, problem-solving, and debugging skills.
- Excellent communication and collaboration abilities, with the capacity to work effectively in an agile team environment.

Responsibilities
- Troubleshoot and resolve data-related issues and performance bottlenecks in existing pipelines.
- Develop and maintain data quality checks, monitoring, and alerting mechanisms to ensure data pipeline reliability.
- Participate in code reviews, contribute to architectural discussions, and promote best practices in data engineering.
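To illustrate two of the AWS services listed (Lambda and Kinesis), here is a hedged sketch of a Lambda-style handler that validates records and forwards them to a Kinesis stream via boto3; the stream name and payload shape are assumptions:

```python
# Hedged sketch: validate incoming records, then forward them to Kinesis.
import json
import boto3

kinesis = boto3.client("kinesis")

def handler(event, context):
    for rec in event.get("records", []):
        if "order_id" not in rec:          # minimal data-quality check
            continue                       # skip malformed records
        kinesis.put_record(
            StreamName="orders-stream",    # hypothetical stream
            Data=json.dumps(rec).encode("utf-8"),
            PartitionKey=str(rec["order_id"]),
        )
    return {"status": "ok"}
```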
Posted 3 days ago
2.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Find your future at United! We're reinventing what our industry looks like, and what an airline can be – from the planes we fly to the people who fly them. When you join us, you're joining a global team of 100,000+ connected by a shared passion with a wide spectrum of experience and skills to lead the way forward. Achieving our ambitions starts with supporting yours. Evolve your career and find your next opportunity. Get the care you need with industry-leading health plans and best-in-class programs to support your emotional, physical, and financial wellness. Expand your horizons with travel across the world's biggest route network. Connect outside your team through employee-led Business Resource Groups. Create what's next with us. Let's define tomorrow together.

Job Overview and Responsibilities
The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. The Data Engineer will partner with various teams to define and execute data acquisition, transformation, and processing, and make data actionable for operational and analytics initiatives that create sustainable revenue and share growth.
- Design, develop, and implement streaming and near-real-time data pipelines that feed systems that are the operational backbone of our business
- Execute unit tests and validate expected results to ensure accuracy and integrity of data and applications through analysis, coding, clear documentation, and problem resolution
- Drive the adoption of data processing and analysis within the Hadoop environment and help cross-train other members of the team
- Leverage strategic and analytical skills to understand and solve customer- and business-centric questions
- Coordinate and guide cross-functional projects that involve team members across all areas of the enterprise, vendors, external agencies, and partners
- Leverage data from a variety of sources to develop data marts and insights that provide a comprehensive understanding of the business
- Develop and implement innovative solutions leading to automation
- Use Agile methodologies to manage projects
- Mentor and train junior engineers

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded.
Qualifications

What's needed to succeed (Minimum Qualifications):
- BS/BA in computer science or a related STEM field
- 2+ years of IT experience in software development
- 2+ years of development experience using Java, Python, or Scala
- 2+ years of experience with big data technologies like PySpark, Hadoop, Hive, HBase, Kafka, and NiFi
- 2+ years of experience with relational database systems like MS SQL Server, Oracle, and Teradata
- Creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights
- Individuals who have a natural curiosity and desire to solve problems are encouraged to apply
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English and Hindi (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master's degree in computer science or a related STEM field
- Experience with cloud-based systems like AWS, Azure, or Google Cloud
- Certified Developer/Architect on AWS
- Strong experience with continuous integration and delivery using Agile methodologies
- Data engineering experience in the transportation/airline industry
- Strong problem-solving skills
- Strong knowledge of big data
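A hedged sketch of the streaming, near-real-time pipeline work described above, reading from Kafka with Spark Structured Streaming; the brokers, topic, and sink paths are illustrative:

```python
# Hedged sketch: Kafka -> Spark Structured Streaming -> parquet sink.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ops-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "flight-events")           # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://bucket/streams/flight-events/")
    .option("checkpointLocation", "s3://bucket/_chk/flight-events/")
    .start()
)
query.awaitTermination()
```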
Posted 3 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Data Scientist || 8 Years || Gurgaon

Primary skills:
• Solid experience in building ML models.
• Proficiency in SQL, Python, PySpark, and Spark ML.
• Good understanding of cloud platforms such as AWS (preferred), Azure, or GCP.
• Proficiency in source code control using GitHub.

Secondary skills:
• Experience using AutoML products like DataRobot or H2O AI.
• Provide input to the Artificial Intelligence (AI) roadmap for marketing, based on TE&O architecture and capability delivery timelines.
• Accountable for identifying, embedding, promoting, and ensuring continuous improvement in the use of new data and advanced analytics across the teams.
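A small sketch of the Spark ML skills listed, assembling features and fitting a classifier in a pipeline; the data is toy and the column names are hypothetical:

```python
# Hedged sketch: a minimal Spark ML pipeline (assemble -> fit -> evaluate).
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ml-sketch").getOrCreate()
df = spark.createDataFrame(
    [(0.2, 1.0, 0), (1.5, 0.3, 1), (0.1, 0.8, 0), (2.0, 0.1, 1)] * 25,
    ["spend", "recency", "label"],
)

pipeline = Pipeline(stages=[
    VectorAssembler(inputCols=["spend", "recency"], outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="label"),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)
auc = BinaryClassificationEvaluator(labelCol="label").evaluate(model.transform(test))
print(f"AUC: {auc:.3f}")
```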
Posted 3 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job Overview and Responsibilities
This role will be responsible for collaborating with the Business and IT teams to identify the value, scope, features, and delivery roadmap for data engineering products and solutions. It is also responsible for communicating with stakeholders across the board, including customers, business managers, and the development team, to make sure the goals are clear and the vision is aligned with business objectives.
- Perform data analysis using SQL
- Perform data quality analysis, data profiling, and summary reporting
- Perform trend analysis and create dashboards based on visualization techniques
- Execute assigned projects/analyses per the agreed timelines, with accuracy and quality
- Complete analysis as required, document results, and formally present findings to management
- Perform ETL workflow analysis, create current/future-state data flow diagrams, and help the team assess the business impact of any changes or enhancements
- Understand the existing Python code workbooks and write pseudocode
- Collaborate with key stakeholders to identify the business case/value and create documentation
- Excellent communication and analytical skills are essential

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications (Required):
- BE, BTech, or equivalent in computer science or a related STEM field
- 5+ years of total IT experience as a Data Analyst/Business Data Analyst or as a Data Engineer
- 2+ years of experience with big data technologies like PySpark, Hadoop, Redshift, etc.
- 3+ years of experience writing SQL queries on RDBMS or cloud-based databases
- Experience with visualization tools such as Spotfire, Power BI, and QuickSight
- Experience in data analysis and requirements gathering
- Strong problem-solving skills
- Creative, driven, detail-oriented focus, requiring tackling of tough problems with data and insights
- Natural curiosity and desire to solve problems
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English and Hindi (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

Qualifications (Preferred):
- AWS certification preferred
- Strong experience with continuous integration and delivery using Agile methodologies
- Data engineering experience in the transportation/airline industry
Posted 3 days ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
As an airline, safety is our most important principle. And our Corporate Safety team is responsible for making sure safety is top of mind in every action we take. From conducting flight safety investigations and educating pilots on potential safety threats to implementing medical programs and helping prevent employee injuries, our team is instrumental in running a safe and successful airline for our customers and employees.

Job Overview and Responsibilities
Corporate safety is integral to ensuring a safe workplace for our employees and travel experience for our customers. This role is responsible for supporting the development and implementation of a cohesive safety data strategy and supporting the Director of Safety Management Systems (SMS) in growing United's Corporate Safety predictive analytics capabilities. This Senior Analyst will serve as a subject matter expert for corporate safety data analytics and predictive insight strategy and execution. The position will support new efforts to deliver insightful data analysis and build new key metrics for use by the entire United Safety organization, with the goal of enabling data-driven decision making and understanding for corporate safety. The Senior Analyst will become the subject matter expert in several corporate-safety-specific data streams and leverage this expertise to deliver insights which are actionable and allow for a predictive approach to safety risk mitigation.
- Develop and implement predictive/prescriptive data analytics workflows for safety data management and streamlining processes
- Collaborate with Digital Technology and United operational teams to analyze, predict, and reduce safety risks and provide measurable solutions
- Partner with the Digital Technology team to develop streamlined and comprehensive data analytics workstreams
- Support United's Safety Management System (SMS) with predictive data analytics by designing and developing statistical models
- Manage and maintain the project portfolio of the SMS data team

Areas of focus will include, but are not limited to:
- Predictive and prescriptive analytics
- Training and validating models
- Creation and maintenance of standardized corporate safety performance metrics
- Design and implementation of new data pipelines
- Delivery of prescriptive analysis insights to internal stakeholders
- Design and maintenance of new and existing corporate safety data pipelines and analytical workflows
- Creation and management of new methods for data analysis which provide prescriptive and predictive insights on corporate safety data
- Partnering with US- and India-based internal partners to establish new data analysis workflows and provide analytical support to corporate and divisional work groups
- Collaborating with corporate and divisional safety partners to ensure standardization and consistency among all safety analytics efforts enterprise-wide
- Providing support and ongoing subject matter expertise regarding a set of high-priority corporate safety datasets and ongoing analytics efforts on those datasets
- Providing tracking and status-update reporting on ongoing assignments, projects, and efforts to US- and India-based leaders

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications

What's needed to succeed (Minimum Qualifications):
- Bachelor's degree in computer science, data science, information systems, engineering, or another quantitative field (e.g., mathematics, statistics, economics)
- 4+ years of experience in data analytics, predictive modeling, or statistics
- Expert-level SQL skills
- Experience with Microsoft SQL Server Management Studio and hands-on experience working with massive data sets
- Proficiency writing complex code using both traditional and modern technologies/languages (e.g., Python, HTML, JavaScript, Power Automate, Spark, Node) for queries, procedures, and analytic processing to create usable data insight
- Ability to study/understand business needs, then design a data/technology solution that connects business processes with quantifiable outcomes
- Strong project management and communication skills
- 3-4 years working with complex data (data analytics, information science, data visualization, or another relevant quantitative field)
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master's degree
- ML/AI experience
- Experience with PySpark, Apache Spark, or Hadoop to deal with massive data sets
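A toy sketch of the "training and validating models" responsibility, fitting and evaluating a predictive classifier on synthetic safety-style features with scikit-learn; all data and feature names are made up for illustration:

```python
# Hedged sketch: train/validate split and evaluation for a risk classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))    # e.g., fatigue, weather, workload, delay (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=7, stratify=y
)
clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```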
Posted 3 days ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary:
This role is accountable for running day-to-day operations of the Data Platform in Azure/AWS Databricks. The role involves designing and implementing data ingestion pipelines from multiple sources using Azure Databricks, ensuring seamless and efficient pipeline execution, and adhering to security, regulatory, and audit control guidelines.

Key Responsibilities:
● Design and implement data ingestion pipelines from multiple sources using Azure Databricks.
● Ensure data pipelines run smoothly and efficiently with minimal downtime.
● Develop scalable and reusable frameworks for ingesting large and complex datasets.
● Integrate end-to-end data pipelines, ensuring quality and consistency from source systems to target repositories.
● Work with event-based and streaming technologies to ingest and process data in real time.
● Collaborate with other project team members to deliver additional components such as API interfaces and search functionalities.
● Evaluate the performance and applicability of various tools against customer requirements and provide recommendations.
● Provide technical advice to the team and assist in issue resolution, leveraging strong Cloud and Databricks knowledge.
● Provide on-call, after-hours, and weekend support as needed to maintain platform stability.
● Fulfil service requests related to the Data Analytics platform efficiently.
● Lead and drive optimisation and continuous improvement initiatives within the team.
● Conduct technical reviews of changes as part of release management, acting as a gatekeeper for production deployments.
● Adhere to data security standards and implement required controls within the platform.
● Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform.
● Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions.
● Own architectural decisions to ensure alignment with data governance, security, and compliance requirements.
● Mentor and guide a team of data engineers, providing technical leadership and supporting career development.
● Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.

Qualifications and Experience:
● Bachelor’s degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
● 7+ years of experience in the data analytics field.
● Proven experience with Azure/AWS Databricks in building and optimising data pipelines, architectures, and datasets.
● Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
● Ability to troubleshoot and optimise complex queries on the Spark platform.
● Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
● Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
● Hands-on experience in performance tuning and optimising code running in Databricks environments.
● Strong analytical and problem-solving skills, particularly within Big Data environments.
● Experience with Big Data management tools and technologies, including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, and Azure.

Technical and Professional Skills:
Must Have:
● Excellent communication skills with the ability to interact directly with customers.
● Azure/AWS Databricks.
● Python / Scala / Spark / PySpark.
● Strong SQL and RDBMS expertise.
● HIVE / HBase / Impala / Parquet.
● Sqoop, Kafka, Flume.
● Airflow.
● Jenkins or Bamboo.
● Github or Bitbucket.
● Nexus.

Good to Have:
● Relevant accredited certifications for Azure, AWS, Cloud Engineering, and/or Databricks.
● Knowledge of Delta Live Tables (DLT).
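For a concrete flavour of the ingestion work described above, here is a minimal sketch of an incremental pipeline on Databricks using Auto Loader. The storage path, checkpoint location, and target table are illustrative assumptions rather than details from the posting; the `spark` session is predefined in Databricks notebooks.

```python
# Minimal sketch of an incremental ingestion pipeline on Databricks,
# using Auto Loader (cloudFiles) to pick up new files from cloud storage.
# Paths and table names below are illustrative, not from the posting.
from pyspark.sql import functions as F

raw_path = "abfss://landing@examplestore.dfs.core.windows.net/orders/"  # hypothetical source
checkpoint = "/mnt/checkpoints/orders_bronze"                           # hypothetical checkpoint

(spark.readStream
    .format("cloudFiles")                        # Databricks Auto Loader
    .option("cloudFiles.format", "json")         # incoming file format
    .option("cloudFiles.schemaLocation", checkpoint)
    .load(raw_path)
    .withColumn("ingested_at", F.current_timestamp())  # audit column
    .writeStream
    .format("delta")
    .option("checkpointLocation", checkpoint)
    .trigger(availableNow=True)                  # run as an incremental batch
    .toTable("bronze.orders"))                   # hypothetical target table
```

Run on a schedule, this gives the "seamless pipeline execution with minimal downtime" pattern the role describes: each trigger processes only files that have arrived since the last checkpoint.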
Posted 3 days ago
7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Software Engineer – Senior (Full Stack Backend – Java)
Location: Chennai (Onsite)
Employment Type: Contract
Budget: Up to ₹22 LPA
34347
Assessment: Full Stack Backend – Java (via HackerRank or equivalent platform)
Notice Period: Immediate Joiners Preferred

Role Overview
We are seeking a highly skilled Senior Software Engineer with expertise in backend development, microservices architecture, and cloud-native technologies. The selected candidate will be part of a collaborative product team responsible for developing and deploying REST APIs and microservices for digital platforms. The role involves working in a fast-paced agile environment, contributing to both engineering excellence and product innovation.

Key Responsibilities
Design, develop, test, and deploy high-quality, scalable backend systems and APIs.
Collaborate with cross-functional teams including product managers, designers, and QA engineers to deliver customer-centric solutions.
Write clean, maintainable, and well-documented code following industry best practices.
Participate in pair programming, code reviews, and test-driven development.
Contribute to defining architecture and service-level objectives.
Conduct proof-of-concepts for new capabilities and features.
Drive continuous improvement in code quality, testing, and deployment processes.

Required Skills
7+ years of hands-on experience in software engineering with a focus on backend development or full-stack engineering.
Strong expertise in Java and microservices architecture.
Solid understanding and working knowledge of:
Google Cloud Platform (GCP) services including BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL, and Airflow.
Infrastructure as Code (IaC) tools like Terraform.
CI/CD tools such as Tekton.
Databases: PostgreSQL, Cloud SQL.
Programming/scripting: Python, PySpark.
Building and consuming RESTful APIs.

Preferred Qualifications
Experience with containerization and orchestration tools.
Familiarity with monitoring tools and service-level indicators (SLIs/SLAs).
Exposure to agile frameworks like Extreme Programming (XP), Scrum, or Kanban.

Education
Required: Bachelor’s degree in Computer Science, Engineering, or a related technical discipline.

Skills: restful apis, pyspark, tekton, data fusion, bigquery, cloud sql, microservices architecture, microservices, software, terraform, postgresql, dataflow, code, cloud, dataproc, google cloud platform (gcp), ci/cd, airflow, python, java
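Alongside the Java backend work, the posting lists Python/PySpark against GCP data services. As a minimal sketch of that side of the stack — assuming the spark-bigquery connector is on the cluster classpath (for example, on a Dataproc job) and using a public sample table for illustration — a BigQuery read from Spark might look like:

```python
# Minimal sketch: reading a BigQuery table from PySpark, as one might on Dataproc.
# Assumes the spark-bigquery connector is available; the table is a public sample.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bq-read-sketch").getOrCreate()

df = (spark.read.format("bigquery")
      .option("table", "bigquery-public-data.samples.shakespeare")  # public sample table
      .load())

# Aggregate word counts per corpus and show the largest few.
(df.groupBy("corpus")
   .sum("word_count")
   .orderBy("sum(word_count)", ascending=False)
   .show(5))
```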
Posted 3 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
We are seeking a highly skilled and motivated Python, AWS, and Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, along with programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
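The pipeline work described here usually reduces to a read–transform–write pattern. A minimal PySpark sketch, with S3 paths and column names that are illustrative placeholders rather than details from the posting:

```python
# Minimal read-transform-write ETL sketch in PySpark.
# S3 paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV events from object storage.
events = spark.read.option("header", True).csv("s3a://example-raw/events/")

# Transform: basic cleansing and a daily aggregate.
daily = (events
         .dropDuplicates(["event_id"])                     # de-duplicate on a key column
         .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("event_count")))

# Load: write partitioned Parquet for downstream analytics.
daily.write.mode("overwrite").partitionBy("event_date").parquet("s3a://example-curated/daily_events/")
```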
Posted 3 days ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata

Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and in cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will support the build of large-scale data architectures that provide information to downstream systems and business users.

Your Key Responsibilities
Design and execute manual and automated test cases, including validating alignment with ELT data integrity and compliance.
Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
Establish methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automation functionality.
Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
To qualify for the role, you must have the following:

Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Warehousing, or a related field
10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Understanding of the project and test lifecycle, including exposure to CMMi and process improvement frameworks
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
Testing experience in Data Quality, ETL, OLAP, or Reports
Knowledge of Data Transformation Projects, including database design concepts and white-box testing
Experience in cloud-based data solutions – AWS/Azure
Demonstrated understanding and experience using:
Cloud-based data solutions (AWS, IICS, Databricks)
GxP and regulatory and risk compliance
Cloud AWS infrastructure testing
Python data processing
SQL scripting
Test processes (e.g., ELT testing, SDLC)
Power BI/Tableau
Scripting (e.g., Perl and shell)
Data engineering programming languages (i.e., Python)
Distributed data technologies (e.g., PySpark)
Test management and defect management tools (e.g., HP ALM)
Cloud platform deployment and tools (e.g., Kubernetes)
DevOps and continuous integration
Databricks/ETL
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
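Since the role stresses implementing data quality checks for accuracy, completeness, and consistency, here is a minimal sketch of what such checks can look like in PySpark. The dataset path, key column, and thresholds are illustrative assumptions, not requirements from the posting:

```python
# Minimal sketch of automated data quality checks in PySpark.
# Dataset path, key column, and thresholds are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.parquet("s3a://example-curated/customers/")  # hypothetical dataset

total = df.count()

# Completeness: no more than 1% null customer_id values.
null_ids = df.filter(F.col("customer_id").isNull()).count()
assert null_ids / max(total, 1) <= 0.01, f"completeness check failed: {null_ids} null ids"

# Uniqueness: customer_id should be a unique key.
dup_ids = df.groupBy("customer_id").count().filter(F.col("count") > 1).count()
assert dup_ids == 0, f"uniqueness check failed: {dup_ids} duplicated ids"

# Validity: ages must fall inside a plausible range.
bad_ages = df.filter(~F.col("age").between(0, 120)).count()
assert bad_ages == 0, f"validity check failed: {bad_ages} out-of-range ages"

print(f"All data quality checks passed on {total} rows")
```

Checks like these would typically run as a gate in the pipeline itself, so a failing batch is caught and quarantined before it reaches downstream consumers.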
Posted 3 days ago
6.0 years
6 - 9 Lacs
Hyderābād
On-site
CACI International Inc is an American multinational professional services and information technology company headquartered in Northern Virginia. CACI provides expertise and technology to enterprise and mission customers in support of national security missions and government transformation for defense, intelligence, and civilian customers. CACI has approximately 23,000 employees worldwide.

Headquartered in London, CACI Ltd is a wholly owned subsidiary of CACI International Inc., a publicly listed company on the NYSE with annual revenue in excess of US $6.2bn. Founded in 2022, CACI India is an exciting, growing and progressive business unit of CACI Ltd. CACI Ltd currently has over 2,000 intelligent professionals and is now adding many more from our Hyderabad and Pune offices. Through a rigorous emphasis on quality, CACI India has grown considerably to become one of the UK’s most well-respected technology centres.

About the Data Platform:
The Data Platform will be built and managed “as a Product” to support a Data Mesh organization. The Data Platform focuses on enabling decentralized management, processing, analysis and delivery of data, while enforcing corporate-wide federated governance on data and project environments across business domains. The goal is to empower multiple teams to create and manage high-integrity data and data products that are analytics- and AI-ready, and consumed internally and externally.

What does a Data Infrastructure Engineer do?
A Data Infrastructure Engineer will be responsible for developing, maintaining and monitoring the data platform infrastructure and operations. The infrastructure and pipelines you build will support data processing, data analytics, data science and data management across the CACI business. The data platform infrastructure will conform to a zero-trust, least-privilege architecture, with strict adherence to data and infrastructure governance and control in a multi-account, multi-region AWS environment. You will use Infrastructure as Code and CI/CD to continuously improve, evolve and repair the platform. You will be able to design architectures and create re-usable solutions to reflect the business needs.

Responsibilities will include:
Collaborating across CACI departments to develop and maintain the data platform
Building infrastructure and data architectures in CloudFormation and SAM
Designing and implementing data processing environments and integrations using AWS PaaS such as Glue, EMR, SageMaker, Redshift, Aurora and Snowflake
Building data processing and analytics pipelines as code, using Python, SQL, PySpark, Spark, CloudFormation, Lambda, Step Functions and Apache Airflow
Monitoring and reporting on the data platform performance, usage and security
Designing and applying security and access control architectures to secure sensitive data

You will have:
6+ years of experience in a Data Engineering role.
Strong experience and knowledge of data architectures implemented in AWS using native AWS services such as S3, DataZone, Glue, EMR, SageMaker, Aurora and Redshift.
Experience administering databases and data platforms.
Good coding discipline in terms of style, structure, versioning, documentation and unit tests.
Strong proficiency in CloudFormation, Python and SQL.
Knowledge and experience of relational databases such as Postgres and Redshift.
Experience using Git for code versioning and lifecycle management.
Experience operating to Agile principles and ceremonies.
Hands-on experience with CI/CD tools such as GitLab.
Strong problem-solving skills and ability to work independently or in a team environment.
Excellent communication and collaboration skills.
A keen eye for detail, and a passion for accuracy and correctness in numbers.

Whilst not essential, the following skills would also be useful:
Experience using Jira, or other agile project management and issue-tracking software
Experience with Snowflake
Experience with spatial data processing
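The pipelines-as-code theme above (Python, Lambda, Step Functions, Apache Airflow) often starts with a DAG definition. A minimal Airflow sketch follows; the task bodies are illustrative placeholders standing in for real ingestion and transformation logic:

```python
# Minimal Airflow DAG sketch: a daily extract -> transform dependency chain.
# Task bodies are illustrative placeholders, not real ingestion logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a day's worth of source data into S3.
    print(f"extracting data for {context['ds']}")


def transform(**context):
    # Placeholder: run the PySpark transformation for that day.
    print(f"transforming data for {context['ds']}")


with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # one run per day
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```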
Posted 3 days ago
5.0 years
3 - 7 Lacs
Hyderābād
On-site
Company Profile:
LSEG (London Stock Exchange Group) is a world-leading financial markets infrastructure and data business. We are dedicated, open-access partners with a dedication to perfection in delivering services across Data & Analytics, Capital Markets, and Post Trade. Backed by three hundred years of experience, innovative technologies, and a team of over 23,000 people in 70 countries, our purpose is driving financial stability, empowering economies, and enabling customers to create sustainable growth. Working in partnership with Tata Consultancy Services (TCS), we are excited to expand our tech centres of perfection in India by building a new global centre, right here in the heart of Hyderabad.

Role Profile:
As a Sr AWS Developer, you will participate in all aspects of the software development lifecycle, including estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. Working in a team environment, you will collaborate with solution architects and developers on the interpretation/translation of wireframes and creative designs into functional requirements, and subsequently into technical designs.

Key Responsibilities:
5+ years of experience designing, building, and maintaining robust, scalable and efficient ETL pipelines using Python and Spark.
Develop workflows using AWS services such as Glue, Glue Data Catalog, Lambda, S3, EMR Serverless and API Gateway.
Implement data quality frameworks and governance practices to ensure reliable data processing.
Optimize existing workflows and drive the transformation of data from multiple sources.
Monitor system performance and ensure data reliability through proactive optimizations.
Contribute to technical discussions and deliver high-quality solutions.

Essential/Must-Have Skills:
Hands-on experience with AWS Glue, Glue Data Catalog, Lambda, S3, EMR, EMR Serverless, API Gateway, SNS, SQS, CloudWatch, CloudFormation and CloudFront.
Strong understanding of data quality frameworks, governance practices and scalable architectures.
Practical knowledge of integrating and transforming data from different sources.
Agile methodology experience, including sprint planning and retrospectives.
Excellent interpersonal skills for articulating technical solutions to diverse team members.
Experience in additional programming languages such as Java and Node.js.
Experience with Java, Terraform, Ansible, Python, PySpark, etc.
Knowledge of tools such as Kafka, Datadog, GitLab, Jenkins, Docker and Kubernetes.

Desirable Skills:
AWS Certified Developer or AWS Certified Solutions Architect certification.
Experience with serverless computing paradigms.

LSEG is a leading global financial markets infrastructure and data provider. Our purpose is driving financial stability, empowering economies and enabling customers to create sustainable growth. Our purpose is the foundation on which our culture is built. Our values of Integrity, Partnership, Excellence and Change underpin our purpose and set the standard for everything we do, every day. They go to the heart of who we are and guide our decision making and everyday actions. Working with us means that you will be part of a dynamic organisation of 25,000 people across 65 countries. However, we will value your individuality and enable you to bring your true self to work so you can help enrich our diverse workforce.
You will be part of a collaborative and creative culture where we encourage new ideas and are committed to sustainability across our global business. You will experience the critical role we have in helping to re-engineer the financial ecosystem to support and drive sustainable economic growth. Together, we are aiming to achieve this growth by accelerating the just transition to net zero, enabling growth of the green economy and creating inclusive economic opportunity.

LSEG offers a range of tailored benefits and support, including healthcare, retirement planning, paid volunteering days and wellbeing initiatives. We are proud to be an equal opportunities employer. This means that we do not discriminate on the basis of anyone’s race, religion, colour, national origin, gender, sexual orientation, gender identity, gender expression, age, marital status, veteran status, pregnancy or disability, or any other basis protected under applicable law. Conforming with applicable law, we can reasonably accommodate applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs.

Please take a moment to read this privacy notice carefully, as it describes what personal information London Stock Exchange Group (LSEG) may hold about you, what it’s used for, how it’s obtained, your rights, and how to contact us as a data subject. If you are submitting as a Recruitment Agency Partner, it is essential and your responsibility to ensure that candidates applying to LSEG are aware of this privacy notice.
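For flavour, here is a minimal AWS Glue job of the kind this role describes, reading a catalogued table and writing curated Parquet to S3. The catalog database, table, and output path are illustrative assumptions, not details from the posting:

```python
# Minimal AWS Glue job sketch: read a catalogued table, write curated Parquet.
# Database, table, and output path are illustrative assumptions.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read via the Glue Data Catalog rather than raw paths.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",    # hypothetical catalog database
    table_name="raw_trades",  # hypothetical catalog table
)

# Drop obviously incomplete records, then write partitioned Parquet to S3.
cleaned = source.toDF().dropna(subset=["trade_id"])
cleaned.write.mode("overwrite").parquet("s3://example-curated/trades/")

job.commit()
```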
Posted 3 days ago
3.0 years
6 - 8 Lacs
Hyderābād
Remote
Accellor is looking for a Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will possess a strong background in Spark-based development, demonstrate high proficiency in SQL, and be comfortable working independently, collaboratively within a team, or leading other developers when required.

Design, develop, and maintain ETL pipelines using PySpark Notebooks and Microsoft Fabric.
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
Migrate and integrate data from legacy SQL Server environments into modern data platforms.
Optimize data pipelines and workflows for scalability, efficiency, and reliability.
Provide technical leadership and mentorship to junior developers and other team members.
Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
Support data-driven decision-making processes by ensuring data integrity, availability, and consistency across different platforms.

Requirements
Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field.
Experience with Microsoft Fabric or similar cloud-based data integration platforms is a must.
A minimum of 3 years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
Experience working with both structured and unstructured data sources.
Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
Proven ability to work independently, as part of a team, and in leadership roles.
Strong communication skills with the ability to translate complex technical concepts into business terms.

Mandatory skills
Experience with Data Lake, Data Warehouse, and Delta Lake.
Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
Knowledge of scripting languages (e.g., Python, Scala) for data manipulation and automation.
Familiarity with DevOps practices, CI/CD pipelines, and containerization (Docker, Kubernetes) is a plus.

Benefits
Exciting Projects: We focus on industries like High-Tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centres.
Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training, stress management programs, professional certifications, and technical and soft-skill trainings.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
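Because the role pairs legacy SQL Server with Fabric/Spark, a common first task is lifting a table over JDBC into Delta. A minimal sketch follows; the connection string, credentials handling, and table names are illustrative assumptions, and the `spark` session is predefined in Fabric and Databricks notebooks:

```python
# Minimal sketch: copy a legacy SQL Server table into a Delta table via JDBC.
# Connection details and table names are illustrative assumptions; in practice
# credentials would come from a secret store, not literals.
jdbc_url = "jdbc:sqlserver://legacy-host:1433;databaseName=SalesDB"

orders = (spark.read.format("jdbc")
          .option("url", jdbc_url)
          .option("dbtable", "dbo.Orders")
          .option("user", "etl_reader")            # placeholder credential
          .option("password", "<from-key-vault>")  # placeholder credential
          .option("fetchsize", 10000)              # stream rows in batches
          .load())

# Land as Delta in the lakehouse for downstream transformation.
(orders.write
       .format("delta")
       .mode("overwrite")
       .saveAsTable("lakehouse.bronze_orders"))    # hypothetical target
```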
Posted 3 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do
Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience:
Master’s degree / Bachelor’s degree and 5 to 9 years of experience in Computer Science, IT, or a related field

Functional Skills:
Must-Have Skills
Hands-on experience with big data technologies and platforms, such as Databricks and Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools
Excellent problem-solving skills and the ability to work with large, complex datasets
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Good-to-Have Skills:
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
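Migrating and deploying data across systems, as the responsibilities above describe, frequently means incremental upserts. A minimal sketch using the Delta Lake Python API as commonly used on Databricks; the staging path, table name, and match key are illustrative assumptions:

```python
# Minimal sketch of an incremental upsert (MERGE) with the Delta Lake API,
# as commonly used on Databricks. Paths, table names, and the key are illustrative.
from delta.tables import DeltaTable

# Incoming batch of changed records, e.g. from an upstream extract.
updates = spark.read.parquet("/mnt/staging/patient_updates/")  # hypothetical path

target = DeltaTable.forName(spark, "silver.patients")          # hypothetical table

(target.alias("t")
    .merge(updates.alias("u"), "t.patient_id = u.patient_id")  # match on business key
    .whenMatchedUpdateAll()      # refresh existing rows
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute())
```

Compared with a full overwrite, a MERGE like this touches only changed rows, which keeps incremental loads cheap and preserves the target table's history.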
Posted 3 days ago
15.0 years
0 Lacs
Bhubaneshwar
On-site
Project Role: Custom Software Engineer
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Custom Software Engineer, you will develop custom software solutions to design, code, and enhance components across systems or applications. Your typical day will involve collaborating with cross-functional teams to understand business requirements, and utilizing modern frameworks and agile practices to deliver scalable and high-performing solutions tailored to specific business needs. You will engage in problem-solving activities, ensuring that the software solutions meet the highest standards of quality and performance while adapting to evolving project requirements.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve software development processes to increase efficiency.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in PySpark.
- Strong understanding of data processing frameworks and distributed computing.
- Experience with modern software development methodologies, particularly Agile.
- Familiarity with cloud platforms and services for deploying applications.
- Ability to troubleshoot and optimize performance in software applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- A 15-year full-time education is required.
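On the performance-troubleshooting point, a classic PySpark optimization is broadcasting a small dimension table to avoid a shuffle-heavy join. A minimal sketch with illustrative in-memory data; in practice the fact side would be far larger:

```python
# Minimal sketch: broadcast join to avoid shuffling a large fact table.
# Data here is illustrative; in practice the fact side would be far larger.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("broadcast-join-sketch").getOrCreate()

facts = spark.createDataFrame(
    [(1, "IN", 250.0), (2, "US", 120.0), (3, "IN", 75.5)],
    ["order_id", "country_code", "amount"],
)
dims = spark.createDataFrame(
    [("IN", "India"), ("US", "United States")],
    ["country_code", "country_name"],
)

# F.broadcast hints Spark to ship the small table to every executor,
# turning a shuffle join into a map-side join.
joined = facts.join(F.broadcast(dims), on="country_code", how="left")
joined.groupBy("country_name").agg(F.sum("amount").alias("total")).show()
```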
Posted 3 days ago