5.0 - 10.0 years
8 - 14 Lacs
Mumbai
Remote
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity (see the ingestion sketch after this listing).
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
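To make the listing's core task concrete, here is a minimal ingestion sketch, assuming the open-source `spark-xml` package is available on the cluster; the package version, file path, and `rowTag` value are illustrative assumptions, not details taken from the posting:

```python
from pyspark.sql import SparkSession

# Assumes spark-xml is on the classpath, e.g. the job was launched with
#   spark-submit --packages com.databricks:spark-xml_2.12:0.17.0 ingest.py
spark = SparkSession.builder.appName("xml-ingest").getOrCreate()

# rowTag names the XML element that becomes one DataFrame row;
# "order" and the S3 path are hypothetical placeholders.
raw = (
    spark.read.format("xml")
    .option("rowTag", "order")
    .load("s3://example-bucket/landing/orders.xml")
)

# spark-xml infers a nested schema of structs and arrays, which a
# later flattening step can unpack for analytics.
raw.printSchema()
```

At the 20+ levels of nesting the posting mentions, supplying an explicit schema instead of relying on inference is usually preferable, since inference requires an extra pass over the data.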
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Jaipur
Remote
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Chennai
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
Posted 2 weeks ago
5.0 - 10.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be involved in designing, building, and configuring applications to meet business process and application requirements. Your typical day will revolve around creating innovative solutions to address various business needs and ensuring seamless application functionality.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Lead the development and implementation of complex applications.
- Conduct code reviews and provide technical guidance to team members.
- Stay updated on industry trends and best practices to enhance application development processes.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Strong understanding of distributed computing and data processing.
- Experience in building scalable and efficient data pipelines.
- Knowledge of cloud platforms such as AWS or Azure.
- Hands-on experience with data manipulation and transformation techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Posted 2 weeks ago
2.0 - 7.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Apache Spark
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will be responsible for overseeing the entire application development process and ensuring its successful implementation.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Lead the design, development, and implementation of applications.
- Collaborate with cross-functional teams to gather and analyze requirements.
- Ensure the applications meet quality standards and are delivered on time.
- Provide technical guidance and mentorship to junior team members.
- Stay updated with the latest industry trends and technologies.
- Identify and resolve any issues or bottlenecks in the application development process.

Professional & Technical Skills:
- Must-have skills: Proficiency in Apache Spark.
- Strong understanding of distributed computing principles.
- Experience with big data processing frameworks like Hadoop or Apache Flink.
- Knowledge of programming languages such as Java or Scala.
- Hands-on experience with data processing and analysis using Spark SQL (see the sketch after this listing).
- Good-to-have skills: Familiarity with cloud platforms like AWS or Azure.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Apache Spark.
- This position is based at our Chennai office.
- 15 years of full-time education is required.
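As an illustration of the Spark SQL work this listing names, a small self-contained sketch; the dataset path, view name, and columns are invented for the example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

# Hypothetical input: a Parquet dataset of page-view events.
events = spark.read.parquet("/data/events")
events.createOrReplaceTempView("events")

# Aggregate with plain SQL over the registered view.
daily = spark.sql("""
    SELECT date(event_ts) AS day, COUNT(*) AS views
    FROM events
    GROUP BY date(event_ts)
    ORDER BY day
""")
daily.show()
```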
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Kolkata
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference (a flattening sketch follows this listing).
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
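Parsing XML with 20+ levels of nesting usually comes down to systematically expanding structs and exploding arrays. Below is one possible sketch, an iterative equivalent of the recursive methods the listing names; it operates on any nested DataFrame, so no posting-specific names are involved:

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType, StructType

def flatten(df: DataFrame) -> DataFrame:
    """Expand struct fields and explode arrays until the schema is flat.
    Nested field names are joined with '_' to keep columns unique."""
    while True:
        complex_fields = [
            (f.name, f.dataType)
            for f in df.schema.fields
            if isinstance(f.dataType, (StructType, ArrayType))
        ]
        if not complex_fields:
            return df  # nothing nested remains
        name, dtype = complex_fields[0]
        if isinstance(dtype, StructType):
            # Promote each struct member to a top-level column.
            expanded = [
                F.col(f"`{name}`.`{sub.name}`").alias(f"{name}_{sub.name}")
                for sub in dtype.fields
            ]
            others = [F.col(f"`{c}`") for c in df.columns if c != name]
            df = df.select(*others, *expanded)
        else:
            # ArrayType: one output row per element (explode_outer keeps
            # rows whose array is empty or null).
            df = df.withColumn(name, F.explode_outer(F.col(f"`{name}`")))
```

On very deep documents repeated explodes can multiply row counts sharply, which is where the partitioning and caching strategies the posting mentions come in.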
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Noida
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Ahmedabad
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently.
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
Posted 2 weeks ago
5.0 - 10.0 years
8 - 14 Lacs
Pune
Work from Office
Key Responsibilities:
- Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity.
- Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures.
- Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently (see the tuning sketch after this listing).
- Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases.
- Collaborate with data architects and analysts to define data models for nested XML schemas.
- Troubleshoot performance bottlenecks and ensure reliability in distributed environments (e.g., AWS, Databricks, Hadoop).
- Document parsing logic, data lineage, and optimization strategies for maintainability.

Qualifications:
- 5+ years of hands-on experience with PySpark and Spark XML libraries (e.g., `spark-xml`) in production environments.
- Proven track record of parsing XML data with 20+ levels of nesting using recursive methods and schema inference.
- Expertise in XPath, XQuery, and DataFrame transformations (e.g., `explode`, `struct`, `selectExpr`) for hierarchical data.
- Strong understanding of Spark optimization techniques: partitioning strategies, broadcast variables, and memory management.
- Experience with distributed computing frameworks (e.g., Hadoop, YARN) and cloud platforms (AWS, Azure, GCP).
- Familiarity with big data file formats (Parquet, Avro) and orchestration tools (Airflow, Luigi).
- Bachelor's degree in Computer Science, Data Engineering, or a related field.

Preferred Skills:
- Experience with schema evolution and versioning for nested XML/JSON datasets.
- Knowledge of Scala or Java for extending Spark XML libraries.
- Exposure to Databricks, Delta Lake, or similar platforms.
- Certifications in AWS/Azure big data technologies.
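As a sketch of the partitioning and caching work this listing highlights, assuming an already-flattened dataset; the paths, column names, and partition count below are placeholders:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("xml-tuning").getOrCreate()

# Hypothetical flattened output of an earlier XML-parsing stage.
orders = spark.read.parquet("/lake/silver/orders")

# Repartition on the aggregation key to reduce shuffle skew, and cache
# because several downstream jobs reuse this DataFrame.
orders = orders.repartition(200, "customer_id").cache()
orders.count()  # action that materializes the cache

daily = (
    orders.groupBy("order_date")
    .agg(F.sum("amount").alias("revenue"))
)

# Partition output files by date so later queries can prune partitions.
(
    daily.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("/lake/gold/daily_revenue")
)
```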
Posted 2 weeks ago
1.0 - 5.0 years
9 - 13 Lacs
Bengaluru
Work from Office
We are looking for a skilled and experienced PySpark Tech Lead to join our dynamic engineering team. In this role, you will lead the development and execution of high-performance big data solutions using PySpark. You will work closely with data scientists, engineers, and architects to design and implement scalable data pipelines and analytics solutions. As a Tech Lead, you will mentor and guide a team of engineers, ensuring the adoption of best practices for building robust and efficient systems while driving innovation in the use of data technologies.

Key Responsibilities:
- Lead and Develop: Design and implement scalable, high-performance data pipelines and ETL processes using PySpark on distributed systems (see the streaming sketch after this listing).
- Tech Leadership: Provide technical direction and leadership to a team of engineers, ensuring the delivery of high-quality solutions that meet both business and technical requirements.
- Architect Solutions: Develop and enforce best practices for architecture, design, and coding standards. Lead the design of complex data engineering workflows, ensuring they are optimized for performance and cost-effectiveness.
- Collaboration: Collaborate with data scientists, analysts, and other stakeholders to understand data requirements, translating them into scalable technical solutions.
- Optimization & Performance Tuning: Optimize large-scale data processing pipelines to improve efficiency and performance. Implement best practices for memory management, data partitioning, and parallelization in Spark.
- Code Review & Mentorship: Conduct code reviews to ensure high-quality code, maintainability, and scalability. Provide guidance and mentorship to junior and mid-level engineers.
- Innovation & Best Practices: Stay current on new data technologies and trends, bringing fresh ideas and solutions to the team. Implement continuous integration and deployment pipelines for data workflows.
- Problem Solving: Identify bottlenecks, troubleshoot, and resolve issues related to data quality, pipeline failures, and performance optimization.

Skills and Qualifications:
- Experience: 7+ years of hands-on experience in PySpark and large-scale data processing.
- Technical Expertise: Strong knowledge of PySpark, Spark SQL, and Apache Kafka. Experience with cloud platforms like AWS (EMR, S3), Google Cloud, or Azure. In-depth understanding of distributed computing, parallel processing, and data engineering principles.
- Data Engineering: Expertise in building ETL pipelines, data wrangling, and working with structured and unstructured data. Experience with databases (relational and NoSQL) such as SQL, MongoDB, or DynamoDB. Familiarity with data warehousing solutions and query optimization techniques.
- Leadership & Communication: Proven ability to lead a technical team, make key architectural decisions, and mentor junior engineers. Excellent communication skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.
- Problem Solving: Strong analytical skills with the ability to solve complex problems involving large datasets and distributed systems.
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).
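Since the listing pairs PySpark with Apache Kafka, here is a hedged sketch of a structured-streaming ingest; the broker address, topic, schema, and paths are invented for the example, and the spark-sql-kafka package is assumed to be on the classpath:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka-etl").getOrCreate()

# Events are JSON; streaming sources cannot infer schemas, so declare one.
schema = StructType([
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "payments")                   # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land raw events as Parquet; the checkpoint enables restart recovery.
query = (
    events.writeStream.format("parquet")
    .option("path", "/lake/bronze/payments")
    .option("checkpointLocation", "/checkpoints/payments")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```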
Posted 2 weeks ago
1.0 - 5.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Job Title: Data Engineer
Experience: 5-8 Years
Location: Delhi, Pune, Bangalore (Hyderabad & Chennai also acceptable)
Time Zone: Aligned with UK time zone
Notice Period: Immediate joiners only

Role Overview:
We are seeking experienced Data Engineers to design, develop, and optimize large-scale data processing systems. You will play a key role in building scalable, efficient, and reliable data pipelines in a cloud-native environment, leveraging your expertise in GCP, BigQuery, Dataflow, Dataproc, and more.

Key Responsibilities:
- Design, build, and manage scalable and reliable data pipelines for real-time and batch processing.
- Implement robust data processing solutions using GCP services and open-source technologies (see the BigQuery sketch after this listing).
- Create efficient data models and write high-performance analytics queries.
- Optimize pipelines for performance, scalability, and cost-efficiency.
- Collaborate with data scientists, analysts, and engineering teams to ensure smooth data integration and transformation.
- Maintain high data quality, enforce validation rules, and set up monitoring and alerting.
- Participate in code reviews and deployment activities, and provide production support.

Technical Skills Required:
- Cloud Platforms: GCP (Google Cloud Platform) - mandatory
- Key GCP Services: Dataproc, BigQuery, Dataflow
- Programming Languages: Python, Java, PySpark
- Data Engineering Concepts: Data ingestion, Change Data Capture (CDC), ETL/ELT pipeline design
- Strong understanding of distributed computing, data structures, and performance tuning

Required Qualifications & Attributes:
- 5-8 years of hands-on experience in data engineering roles
- Proficiency in building and optimizing distributed data pipelines
- Solid grasp of data governance and security best practices in cloud environments
- Strong analytical and problem-solving skills
- Effective verbal and written communication skills
- Proven ability to work independently and in cross-functional teams
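As one possible shape for the GCP work this listing describes, a sketch using the spark-bigquery connector; the project, dataset, table, and bucket names are hypothetical, and the connector jar is assumed to be available (as it is by default on recent Dataproc images):

```python
from pyspark.sql import SparkSession

# Assumes the spark-bigquery connector is on the classpath, e.g.:
#   gcloud dataproc jobs submit pyspark job.py \
#     --jars=gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12.jar
spark = SparkSession.builder.appName("gcp-pipeline").getOrCreate()

# Read a BigQuery table (hypothetical names) into a DataFrame.
events = (
    spark.read.format("bigquery")
    .option("table", "my-project.analytics.events")
    .load()
)

daily = events.groupBy("event_date").count()

# Indirect writes stage files in GCS before loading into BigQuery.
(
    daily.write.format("bigquery")
    .option("table", "my-project.analytics.daily_counts")
    .option("temporaryGcsBucket", "my-temp-bucket")
    .mode("overwrite")
    .save()
)
```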
Posted 2 weeks ago
3.0 - 7.0 years
11 - 15 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
- Design, develop, and maintain scalable data/code pipelines using Azure Databricks, Apache Spark, and Scala
- Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions
- Optimize and tune Spark applications for performance and scalability
- Implement data processing workflows, ETL processes, and data integration solutions
- Ensure data quality, integrity, and security throughout the data lifecycle
- Troubleshoot and resolve issues related to data processing and pipeline failures
- Stay updated with the latest industry trends and best practices in big data technologies
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications:
- Undergraduate degree or equivalent experience
- 6+ years of proven experience with Azure Databricks, Apache Spark, and Scala
- 6+ years of experience with Microsoft Azure
- Experience with data warehousing solutions and ETL tools
- Solid understanding of distributed computing principles and big data processing
- Proficiency in writing complex SQL queries and working with relational databases
- Proven excellent problem-solving skills and attention to detail
- Solid communication and collaboration skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 weeks ago
2.0 - 4.0 years
4 - 6 Lacs
Mumbai, Hyderabad
Work from Office
Job Responsibilities:
- Collaborate with data scientists, software engineers, and business stakeholders to understand data requirements and design efficient data models.
- Develop, implement, and maintain robust and scalable data pipelines, ETL processes, and data integration solutions.
- Extract, transform, and load data from various sources, ensuring data quality, integrity, and consistency.
- Optimize data processing and storage systems to handle large volumes of structured and unstructured data efficiently.
- Perform data cleaning, normalization, and enrichment tasks to prepare datasets for analysis and modelling.
- Monitor data flows and processes, and identify and resolve data-related issues and bottlenecks.
- Contribute to the continuous improvement of data engineering practices and standards within the organization.
- Stay up-to-date with industry trends and emerging technologies in data engineering, artificial intelligence, and dynamic pricing.

Candidate Profile:
- Strong passion for data engineering, artificial intelligence, and problem-solving.
- Solid understanding of data engineering concepts, data modeling, and data integration techniques.
- Proficiency in programming languages such as Python and SQL, plus web scraping.
- Understanding of databases such as NoSQL, relational, and in-memory databases, and technologies like MongoDB, Redis, and Apache Spark would be an advantage.
- Knowledge of distributed computing frameworks and big data technologies (e.g., Hadoop, Spark) is a plus.
- Excellent analytical and problem-solving skills, with a keen eye for detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team-oriented environment.
- Self-motivated, quick learner, and adaptable to changing priorities and technologies.
Posted 3 weeks ago
2.0 - 6.0 years
4 - 8 Lacs
Chennai
Work from Office
Key Responsibilities:
- Develop, deploy, and maintain scalable web applications using Python (Flask/Django).
- Design and implement RESTful APIs with strong security and authentication mechanisms (see the Flask sketch after this listing).
- Work with MongoDB and other database management systems to store and query data efficiently.
- Support and productize Machine Learning models, including feature engineering, training, tuning, and scoring.
- Understand and apply distributed computing concepts to build high-performance systems.
- Handle web hosting and deployment of applications, ensuring uptime and performance.
- Collaborate with stakeholders to translate business requirements into technical solutions.
- Communicate effectively with both technical and non-technical team members.
- Take ownership of projects, troubleshoot production issues, and implement solutions proactively.

Required Skills & Qualifications:
- 3-5 years of experience in Python development, primarily with Flask (Django experience is a plus).
- Solid knowledge of distributed systems and web architectures.
- Hands-on experience with Machine Learning workflows and model deployment.
- Experience with MongoDB and other database technologies.
- Strong knowledge of RESTful API development and security best practices.
- Excellent problem-solving skills and the ability to work independently.
- Strong communication skills to clearly explain technical concepts to a diverse audience.

Nice to Have:
- Bachelor's or Master's degree in Computer Science, IT, or a related field.
- Experience with data manipulation using Pandas, Spark, and handling large datasets.
- Familiarity with the Django framework in addition to Flask.
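To ground the REST-plus-authentication requirement in code, a minimal Flask sketch; the bearer-token check is a stand-in for real authentication (JWT, OAuth), and every route and name here is hypothetical:

```python
from functools import wraps

from flask import Flask, jsonify, request

app = Flask(__name__)
API_TOKEN = "change-me"  # placeholder; production code reads a secret store

def require_token(view):
    """Reject requests lacking the expected bearer token."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        if auth != f"Bearer {API_TOKEN}":
            return jsonify(error="unauthorized"), 401
        return view(*args, **kwargs)
    return wrapper

@app.route("/api/v1/items/<item_id>", methods=["GET"])
@require_token
def get_item(item_id):
    # A real handler would fetch the document from MongoDB (e.g. pymongo).
    return jsonify(id=item_id, status="ok")

if __name__ == "__main__":
    app.run(port=5000)
```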
Posted 3 weeks ago
8.0 - 12.0 years
10 - 14 Lacs
Mumbai
Work from Office
AryaXAI stands at the forefront of AI innovation, revolutionizing AI for mission-critical businesses by building explainable, safe, and aligned systems that scale responsibly. Our mission is to create AI tools that empower researchers, engineers, and organizations to unlock AI's full potential while maintaining transparency and safety. Our team thrives on a shared passion for cutting-edge innovation, collaboration, and a relentless drive for excellence. At AryaXAI, everyone contributes hands-on to our mission in a flat organizational structure that values curiosity, initiative, and exceptional performance.

This is a full-time remote role for a Principal Engineering Manager at AryaXAI. The Principal Engineering Manager will be responsible for engineering management, team leadership, software development, project management, and integration.

Qualifications:
- 7+ years of experience in building and deploying enterprise-grade machine learning infrastructure, SaaS software, data analytics solutions, or online platforms
- Academic degree in Computer Science, Engineering, or a related field
- Strong background in Python programming, distributed computing, engineering, big data processing, and cloud computing (AWS, Azure, GCP, on-premise)
- Experience in optimizing, scaling, and ensuring the reliability of large-scale systems for various data modalities and AI models
- Strong fundamentals and discipline in CI/CD pipelines, containerization (Docker, Kubernetes), and multi-cloud environments
- Proven track record in product management: designing and delivering complex, high-performance solutions for large-scale enterprise customers without faults, with the ability to auto-scale systems while delivering the essential SLAs in production during inferencing
- Programming skills: expert in Python with solid coding practices
- Data engineering: expert in MongoDB, with strong fundamentals in data lakes, data storage, ETL pipelines, and building real-time data pipelines
- Excellent communication, collaboration, and leadership skills, with the ability to lead teams, mentor juniors, and work with other teams
- Strong system architecture background that supports scaling in regulated industries
- Experience in scaling AI/ML production systems using classic ML models, LLMs, and deep learning models on traditional hardware and high-performance computing

Roles and responsibilities:
- As Principal Engineer, you'll be the main SPOC for all the engineering efforts behind the AryaXAI platform, serving customers in SaaS and on-premise modes and scaling multiple AI models in startups, SMEs, and highly regulated industries.
- You'll be responsible for designing, modifying, and scaling the architecture, as well as designing and implementing scalable infrastructure for AI/ML solutions and data engineering.
- You'll be the SPOC with the R&D team, collecting the productized components and adding and scaling these components on the platform.
- Architect and scale the AI inferencing platform and related components for all data modalities (tabular, text, image, video, etc.).
- You'll be responsible for designing the AI Ops and AI Inferencing Ops: model building, data preprocessing, model inferencing, explainability, monitoring, alignment, and risk management components.
- You'll work on running multiple optimizations on inferencing speed and latencies to serve multiple AI models: LLMs, DL models, classic ML models, etc.
- You'll be the SPOC with the pre-sales and solutions team to provide necessary documentation and specs and collect the requirements for enterprise deployments.
- You'll manage product development activities like scrum, reviewing the scrums, and optimizing productivity across the engineering, data science, and front-end teams.
- You'll mentor and guide juniors and middle managers.
- You'll continuously improve the standards of the product, as well as the architecture, culture, and experience.

What you'll get:
- Highly competitive and meaningful compensation package
- One of the best health care plans, covering not only you but also your family
- A great team
- Micro-entrepreneurial tasks and responsibilities
- Career development and leadership opportunities
Posted 3 weeks ago
9.0 - 14.0 years
50 - 85 Lacs
Noida
Work from Office
About the Role
We are looking for a Staff Engineer to lead the design and development of a scalable, secure, and robust data platform. You will play a key role in building data platform capabilities for data quality, metadata management, lineage tracking, and compliance across all data layers. If you're passionate about building foundational data infrastructure that accelerates innovation in healthcare, we'd love to talk.

A Day in the Life
- Architect, design, and build scalable data governance tools and frameworks.
- Collaborate with cross-functional teams to ensure data compliance, security, and usability.
- Lead initiatives around metadata management, data lineage, and data cataloging.
- Define and evangelize standards and best practices across data engineering teams.
- Own the end-to-end lifecycle of governance tooling, from prototyping to production deployment.
- Mentor and guide junior engineers and contribute to technical leadership across the organization.
- Drive innovation in privacy-by-design, regulatory compliance (e.g., HIPAA), and data observability solutions.

What You Need
- 8+ years of experience in software engineering.
- Strong experience building distributed systems for metadata management, data lineage, and quality tracking.
- Proficiency in backend development (Python, Java, Scala, or Go) and familiarity with RESTful API design.
- Expertise in modern data stacks: Kafka, Spark, Airflow, Snowflake, etc.
- Experience with open-source data governance frameworks like Apache Atlas, Amundsen, or DataHub is a big plus.
- Familiarity with cloud platforms (AWS, Azure, GCP) and their native data governance offerings.
- Prior experience in building metadata management frameworks at scale.
Posted 3 weeks ago
6.0 - 11.0 years
19 - 27 Lacs
Haryana
Work from Office
About Company
Job Description
Key responsibilities:
1. Understand, implement, and automate ETL pipelines in line with industry standards
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications
4. Design and create data pipelines (data lakes / data warehouses) for real-world energy analytics solutions
5. Expert-level proficiency in Python (preferred) for automating everyday tasks
6. Strong understanding of and experience in distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. Some experience using other leading cloud platforms, preferably Azure
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with JIRA, and a clear understanding of how Git works
10. Must have 5-7 years of experience
Posted 3 weeks ago
10 - 15 years
25 - 40 Lacs
Pune
Hybrid
Description
- BS/MS degree in Computer Science or equivalent
- 10-15 years of experience building products on distributed systems, preferably in the data security domain
- Working knowledge of the security domain: ransomware protection, anomaly detection, data classification, and compliance of unstructured data
- Strong knowledge of cloud platforms, APIs, containers, Kubernetes, and Snowflake
- Knowledge of building microservice-based applications
- Hands-on development in either Golang or Python
- Strong development experience on the Linux/Unix OS platform
Posted 4 weeks ago
5 - 10 years
35 - 50 Lacs
Bengaluru
Work from Office
Position summary:
We are seeking a Senior Software Development Engineer - Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.

Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency (see the Delta Lake sketch after this listing).
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.

Basic Qualifications:
- Bachelor's or Master's Degree in Computer Science or Data Science
- 5-8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, query optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.

Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.
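To illustrate the Delta Lake / Lakehouse bullet, a hedged upsert sketch using the delta-spark package; the paths and the `id` join key are assumptions for the example, not details from the posting:

```python
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# Assumes `pip install delta-spark`; the builder config wires Delta in.
builder = (
    SparkSession.builder.appName("delta-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

# Hypothetical incremental batch to merge into the customers table.
updates = spark.read.parquet("/staging/customers")

target = DeltaTable.forPath(spark, "/lake/customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.id = u.id")  # upsert on the id key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

The MERGE pattern is what gives Lakehouse tables reliable incremental loads without full rewrites, which is why the posting pairs Delta Lake with data reliability.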
Posted 4 weeks ago
4 - 6 years
6 - 8 Lacs
Bengaluru
Work from Office
Why Join Us?
Do you like to challenge yourself and find innovative solutions to problems, translating them into efficient code? Do you always enjoy learning new things? Are you ready to work on the next-generation synthesis solution technology? If the answer is yes, then come work with us! We don't need superheroes, just super minds.

Key Responsibilities:
The key responsibilities include owning the design, development, and optimization of sophisticated systems in C/C++ for image and signal processing. The role involves developing and implementing algorithms with a strong focus on performance, scalability, and efficiency. Additionally, the position requires leveraging machine learning techniques, particularly Convolutional Neural Networks (CNNs), for sophisticated image processing, computer vision, and related tasks. The candidate will take ownership of end-to-end software development, leading technical problem-solving efforts. A key aspect of the role is providing technical mentorship and guidance to junior engineers and peers within the team. Finally, the position involves working in an Agile environment, contributing to sprint planning, reviews, and retrospectives.

Job Requirements:
Technical Skills (Must have):
- Strong programming knowledge in C/C++.
- Good image/signal processing knowledge with experience in using OpenCV/Matlab.
- B.Tech or M.Tech in CSE/EE/ECE from a reputed engineering college.
- Excellent algorithm and data-structure design skills with a theoretical background in analysis of algorithms.

Technical Skills (Desirable):
- Experience in parallel and distributed computing, with working knowledge of tools such as Sun Grid Engine, LSF, etc.
- Familiarity with configuration management tools such as CVS.
- Familiarity with Scrum; experience with defect tracking tools such as ClearQuest and JIRA.

Communication:
- Proficiency in English with strong interpersonal and excellent oral and written communication skills.
- Ability to collaborate as part of a globally distributed team. Self-motivated and able to work independently.

We thrive on building a multi-functional team environment, and we look for individuals who are eager to contribute and grow with us!
Posted 1 month ago
2 - 6 years
12 - 16 Lacs
Bengaluru
Work from Office
Siemens EDA is a global technology leader in Electronic Design Automation software. Our software tools enable companies around the world to develop highly innovative electronic products faster and more efficiently. Our customers use our tools to push the boundaries of technology and physics to deliver better products in the increasingly sophisticated world of chip, board, and system design.

Key Responsibilities:
The key responsibilities include leading the design, development, and optimization of complex systems in C/C++ for image and signal processing. The role involves developing and implementing algorithms with a strong focus on performance, scalability, and efficiency. Additionally, the position requires leveraging machine learning techniques, particularly Convolutional Neural Networks (CNNs), for sophisticated image processing, computer vision, and related tasks. The candidate will take ownership of end-to-end software development, leading technical problem-solving efforts. A key aspect of the role is providing technical mentorship and guidance to junior engineers and peers within the team. Finally, the position involves working in an Agile environment, contributing to sprint planning, reviews, and retrospectives.

Job Requirements:
Technical Skills (Must Have):
- We require strong programming expertise in C/C++, ensuring that you have the foundation to develop high-performance applications.
- A proven understanding of image/signal processing and hands-on experience with tools like OpenCV and Matlab are essential to help us deliver cutting-edge solutions.
- We seek individuals with excellent algorithm design and a proven grasp of data structures, backed by a strong theoretical background in algorithm analysis. This will empower us to tackle sophisticated problems efficiently and optimally!

Technical Skills (Desirable):
- Experience with parallel and distributed computing is a definite plus! Familiarity with tools like Sun Grid Engine and LSF will allow you to contribute to our high-performance computing solutions.
- We'd love it if you're comfortable using configuration management tools like CVS, ensuring our codebase is always in top shape.
- Experience with Scrum methodology and defect tracking tools like ClearQuest and JIRA will set you up for success as we strive to continuously improve our development processes.

We are Siemens
A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package which includes a competitive basic salary, variable pay, other benefits, pension, healthcare and actively support working from home. We are an equal opportunity employer and value diversity at our company. We do not discriminate based on race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. #li-eda #LI-HYBRID #Calibre
Posted 1 month ago
8 - 13 years
11 - 15 Lacs
Bengaluru
Work from Office
Looking for Siemens EDA ambassadors
Siemens EDA is a global technology leader in Electronic Design Automation software. Our software tools enable companies around the world to develop highly innovative electronic products faster and more cost-effectively. Our customers use our tools to push the boundaries of technology and physics to deliver better products in the increasingly complex world of chip, board, and system design.

Siemens EDA's D2S Bangalore team, part of the Semiconductor Manufacturing Division, is seeking a Senior Engineer to join our dynamic group. In this role, you will focus on designing and implementing algorithm-centric solutions within the mask data preparation, mask process correction, and lithography systems modeling domain. You will play a significant role in crafting the future of semiconductor manufacturing technology through innovative design, problem-solving, and continuous improvement of our products.

We make real what matters. This is your role:
- Design and implement algorithmic solutions within the mask data preparation and lithography systems modeling domains.
- Contribute to the continuous enhancement of Siemens EDA's product lines through design reviews and technical innovations.
- Collaborate effectively with multi-functional teams across different geographies and cultures.
- Engage with co-workers and collaborators to improve product quality and drive technical excellence.
- Provide technical consultation and drive improvements in product functionality.

We don't need superheroes, just super minds!
We bring together a dynamic team of individuals with a B.E./B.Tech./M.Tech. in Computer Science, Electrical Engineering, Electronics & Communication, Instrumentation & Control, or related fields, with proven ability and 8+ years of experience.

Required Technical Skills:
- Strong programming skills in C/C++ with deep expertise in object-oriented design.
- Solid understanding of algorithms and data structures, with a strong theoretical background in algorithm analysis.
- Experience with geometric data processing and computational geometry algorithms.
- Proficiency in distributed computing environments.
- Familiarity with modern software development methodologies such as Agile.

Desirable Technical Skills:
- Experience in developing EDA applications in the post-layout domain (e.g., Mask Data Preparation, Modeling).
- Knowledge of model calibration tools or an understanding of the model calibration process in semiconductor manufacturing.
- A solid base in computational mathematics and numerical methods (including non-linear optimization).
- Experience in handling large layout/mask data in formats like OASIS, GDSII, MEBES, and VSB.
- Familiarity with parallel and distributed computing tools (e.g., Sun Grid Engine, LSF).
- Experience with configuration management tools such as CVS.
- Knowledge of Scrum methodologies and defect tracking tools like JIRA.

We value individuals with a positive attitude, strong communication and presentation skills, and a dedicated, motivated approach. We seek someone who can provide technical consultation on complex issues, build relationships, and collaborate effectively as a great teammate across teams with varied strengths and cultures!

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

We are Siemens
A collection of over 377,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we encourage applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and creativity and help us shape tomorrow! We offer a comprehensive reward package which includes a competitive basic salary, bonus scheme, generous holiday allowance, pension, and private healthcare. Transform the everyday! #li-eda #LI-Hybrid #calibre
Posted 1 month ago
2 - 6 years
7 - 12 Lacs
Pune
Work from Office
Hello Visionary!
We empower our people to stay resilient and relevant in a constantly changing world. We're looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you'd make a great addition to our vibrant team.

Siemens founded the new business unit Siemens Foundational Technologies (formerly known as Siemens IoT Services) on April 1, 2019, with its headquarters in Munich, Germany. It has been crafted to unlock the digital future of its clients by offering end-to-end support on their outstanding digitalization journey. Siemens Foundational Technologies is a strategic advisor and a trusted implementation partner in digital transformation and industrial IoT with a global network of more than 8,000 employees in 10 countries and 21 offices. Highly skilled and experienced specialists offer services which range from consulting to crafting & prototyping to solution, implementation, and operation, everything out of one hand.

We are looking for a Senior Software Engineer.

You'll make a difference by:
- Experience: At least 5 years of experience in C++ development.
- C++ Expertise: Proficiency in C++11 or later versions.
- Programming Skills: Experience with scripting languages is a plus.
- Conceptual Knowledge: Strong understanding of Object-Oriented Programming (OOP) concepts and design patterns.
- Technical Skills: Proficiency in data structures and algorithms. Knowledge of Linux operating systems. Strong SQL query skills. Familiarity with multithreading, distributed computing, and microservices architecture.
- Communication: Excellent verbal and written communication skills.

Desired Skills:
- 5-8 years of experience is required.
- Great communication skills.
- Analytical and problem-solving skills.

Join us and be yourself! We value your unique identity and perspective and are fully committed to providing equitable opportunities and building a workplace that reflects the diversity of society. Come bring your authentic self and create a better tomorrow with us. Make your mark in our exciting world at Siemens.

This role is based in Pune and is an individual contributor role. You might be required to visit other locations within India and outside. In return, you'll get the chance to work with teams impacting the shape of things to come.

We're Siemens. A collection of over 379,000 minds building the future, one day at a time in over 200 countries. We're dedicated to equality, and we welcome applications that reflect the diversity of the communities we work in. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and imagination and help us shape tomorrow. Find out more about Siemens careers at: www.siemens.com/careers & more about mobility at https://new.siemens.com/global/en/products/mobility.html
Posted 1 month ago
4 - 8 years
25 - 30 Lacs
Pune
Hybrid
So, what's the role all about?
As a Data Engineer, you will be responsible for designing, building, and maintaining large-scale data systems, as well as working with cross-functional teams to ensure efficient data processing and integration. You will leverage your knowledge of Apache Spark to create robust ETL processes, optimize data workflows, and manage high volumes of structured and unstructured data.

How will you make an impact?
- Design, implement, and maintain data pipelines using Apache Spark for processing large datasets.
- Work with data engineering teams to optimize data workflows for performance and scalability.
- Integrate data from various sources, ensuring clean, reliable, and high-quality data for analysis.
- Develop and maintain data models, databases, and data lakes.
- Build and manage scalable ETL solutions to support business intelligence and data science initiatives.
- Monitor and troubleshoot data processing jobs, ensuring they run efficiently and effectively.
- Collaborate with data scientists, analysts, and other stakeholders to understand business needs and deliver data solutions.
- Implement data security best practices to protect sensitive information.
- Maintain a high level of data quality and ensure timely delivery of data to end-users.
- Continuously evaluate new technologies and frameworks to improve data engineering processes.

Have you got what it takes?
- 4-7 years of experience as a Data Engineer, with a strong focus on Apache Spark and big data technologies.
- Expertise in Spark SQL, DataFrames, and RDDs for data processing and analysis.
- Proficient in programming languages such as Python, Scala, or Java for data engineering tasks.
- Hands-on experience with cloud platforms like AWS, specifically with data processing and storage services (e.g., S3, BigQuery, Redshift, Databricks).
- Experience with ETL frameworks and tools such as Apache Kafka, Airflow, or NiFi.
- Strong knowledge of data warehousing concepts and technologies (e.g., Redshift, Snowflake, BigQuery).
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of SQL and relational databases, with the ability to design and query databases effectively.
- Solid understanding of distributed computing, data modeling, and data architecture principles.
- Strong problem-solving skills and the ability to work with large and complex datasets.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.

What's in it for you?
Join an ever-growing, market disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week.
Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7235
Reporting into: Tech Manager
Role Type: Individual Contributor
Posted 1 month ago
7 - 11 years
15 - 19 Lacs
Hyderabad
Work from Office
ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases, and make people's lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what's known today.

What you will do
Role Description:
We are seeking a Data Solutions Architect to design, implement, and optimize scalable and high-performance data solutions that support enterprise analytics, AI-driven insights, and digital transformation initiatives. This role will focus on data strategy, architecture, governance, security, and operational efficiency, ensuring seamless data integration across modern cloud platforms. The ideal candidate will work closely with engineering teams, business stakeholders, and leadership to establish a future-ready data ecosystem, balancing performance, cost-efficiency, security, and usability. This position requires expertise in modern cloud-based data architectures, data engineering best practices, and Scaled Agile methodologies.

Roles & Responsibilities:
- Design and implement scalable, modular, and future-proof data architectures that support enterprise data lakes, data warehouses, and real-time analytics.
- Develop enterprise-wide data frameworks that enable governed, secure, and accessible data across various business domains.
- Define data modeling strategies to support structured and unstructured data, ensuring efficiency, consistency, and usability across analytical platforms.
- Lead the development of high-performance data pipelines for batch and real-time data processing, integrating APIs, streaming sources, transactional systems, and external data platforms.
- Optimize query performance, indexing, caching, and storage strategies to enhance scalability, cost efficiency, and analytical capabilities.
- Establish data interoperability frameworks that enable seamless integration across multiple data sources and platforms.
- Drive data governance strategies, ensuring security, compliance, access controls, and lineage tracking are embedded into enterprise data solutions.
- Implement DataOps best practices, including CI/CD for data pipelines, automated monitoring, and proactive issue resolution, to improve operational efficiency.
- Lead Scaled Agile (SAFe) practices, facilitating Program Increment (PI) Planning, Sprint Planning, and Agile ceremonies, ensuring iterative delivery of enterprise data capabilities.
- Collaborate with business stakeholders, product teams, and technology leaders to align data architecture strategies with organizational goals.
- Act as a trusted advisor on emerging data technologies and trends, ensuring that the enterprise adopts cutting-edge data solutions that provide competitive advantage and long-term scalability.

What we expect of you
Must-Have Skills:
- Experience in data architecture, enterprise data management, and cloud-based analytics solutions.
- Expertise in Databricks, cloud-native data platforms, and distributed computing frameworks.
- Strong proficiency in modern data modeling techniques, including dimensional modeling, NoSQL, and data virtualization.
- Experience designing high-performance ETL/ELT pipelines and real-time data processing solutions.
- Deep understanding of data governance, security, metadata management, and access control frameworks.
- Hands-on experience with CI/CD for data solutions, DataOps automation, and infrastructure as code (IaC).
- Proven ability to collaborate with cross-functional teams, including business executives, data engineers, and analytics teams, to drive successful data initiatives.
- Strong problem-solving, strategic thinking, and technical leadership skills.
- Experience with SQL/NoSQL databases and vector databases for large language models.
- Experience with data modeling and performance tuning for both OLAP and OLTP databases.
- Experience with Apache Spark.
- Experience with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.

Good-to-Have Skills:
- Deep expertise in the Biotech & Pharma industries.
- Experience with Data Mesh architectures and federated data governance models.
- Certification in cloud data platforms or enterprise architecture frameworks.
- Knowledge of AI/ML pipeline integration within enterprise data architectures.
- Familiarity with BI & analytics platforms for enabling self-service analytics and enterprise reporting.

Education and Professional Certifications
- Doctorate degree with 6-8+ years of experience in Computer Science, IT or a related field; OR
- Master's degree with 8-10+ years of experience in Computer Science, IT or a related field; OR
- Bachelor's degree with 10-12+ years of experience in Computer Science, IT or a related field.
- AWS Certified Data Engineer preferred.
- Databricks certification preferred.

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Ability to learn quickly, be organized and detail oriented.
- Strong presentation and public speaking skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 month ago