0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Embark on a transformative journey as a Quant Analyst at Barclays, where you'll spearhead the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. Design analytics and modelling solutions to complex business problems using domain expertise. Collaborate with technology to specify any dependencies required for analytical solutions, such as data, development environments and tools. Develop high-performing, comprehensively documented analytics and modelling solutions, demonstrating their efficacy to business users and independent validation teams. Implement analytics and models in accurate, stable, well-tested software and work with technology to operationalise them.

Essential Skillsets Required For This Role: A bachelor's or master's degree in computer science or related fields. Strong computer science fundamentals. Experience or a master's degree in software development, covering the complete Software Development Life Cycle (SDLC), with a strong understanding of software design patterns. Experience or a master's degree in Python development. Experience with DevOps tools such as Git, Bitbucket, and TeamCity. Proficiency in technical documentation. Excellent verbal and written communication skills.

Some Other Highly Valued Skills May Include: Experience in a financial institution delivering analytical solutions and model implementation. Experience with model deployment frameworks and workflows (e.g., Databricks, Kedro) is a plus. Experience in developing frameworks for mathematical, statistical, and machine learning models and analytics used in business decision-making.

You may be assessed on essential skills relevant to succeeding in the role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based out of the Noida location.

Purpose of the role: To design, develop, implement, and support mathematical, statistical, and machine learning models and analytics used in business decision-making.

Accountabilities: Design analytics and modelling solutions to complex business problems using domain expertise. Collaboration with technology to specify any dependencies required for analytical solutions, such as data, development environments and tools. Development of high-performing, comprehensively documented analytics and modelling solutions, demonstrating their efficacy to business users and independent validation teams. Implementation of analytics and models in accurate, stable, well-tested software and work with technology to operationalise them. Provision of ongoing support for the continued effectiveness of analytics and modelling solutions to users. Demonstrate conformance to all Barclays Enterprise Risk Management Policies, particularly Model Risk Policy. Ensure all development activities are undertaken within the defined control environment.

Analyst Expectations: To perform prescribed activities in a timely manner and to a high standard, consistently driving continuous improvement. Requires in-depth technical knowledge and experience in their assigned area of expertise, and a thorough understanding of the underlying principles and concepts within the area of expertise. They lead and supervise a team, guiding and supporting professional development, allocating work requirements and coordinating team resources. 
If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others. OR for an individual contributor, they develop technical expertise in their work area, acting as an advisor where appropriate. Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for end results of a team's operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside the team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
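For illustration of the "accurate, stable, well-tested software" expectation described in this listing, here is a minimal, hedged sketch of a toy model function with a unit test. The model, inputs, and thresholds are hypothetical placeholders, not Barclays' methods.

```python
# Illustrative only: a toy scoring model with a unit test, sketching the
# "well-tested software" expectation. All names and thresholds are hypothetical.
import math
import unittest


def default_probability(income: float, debt: float, coef: float = 1.5) -> float:
    """Toy logistic model mapping a debt-to-income ratio to a probability."""
    if income <= 0:
        raise ValueError("income must be positive")
    ratio = debt / income
    return 1.0 / (1.0 + math.exp(-coef * (ratio - 1.0)))


class TestDefaultProbability(unittest.TestCase):
    def test_bounds(self):
        p = default_probability(income=50_000, debt=10_000)
        self.assertTrue(0.0 <= p <= 1.0)

    def test_monotonic_in_debt(self):
        low = default_probability(income=50_000, debt=10_000)
        high = default_probability(income=50_000, debt=60_000)
        self.assertLess(low, high)

    def test_rejects_bad_input(self):
        with self.assertRaises(ValueError):
            default_probability(income=0, debt=1_000)


if __name__ == "__main__":
    unittest.main()
```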
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
delhi
On-site
The ideal candidate should possess extensive expertise in SQL, data modeling, ETL/ELT pipeline development, and cloud-based data platforms like Databricks or Snowflake. You will be responsible for designing scalable data models, managing reliable data workflows, and ensuring the integrity and performance of critical financial datasets. Collaboration with engineering, analytics, product, and compliance teams is a key aspect of this role.

Responsibilities:
- Design, implement, and maintain logical and physical data models for transactional, analytical, and reporting systems.
- Develop and oversee scalable ETL/ELT pipelines to process large volumes of financial transaction data.
- Optimize SQL queries, stored procedures, and data transformations for enhanced performance.
- Create and manage data orchestration workflows using tools like Airflow, Dagster, or Luigi (see the sketch after this listing).
- Architect data lakes and warehouses utilizing platforms such as Databricks, Snowflake, BigQuery, or Redshift.
- Ensure adherence to data governance, security, and compliance standards (e.g., PCI-DSS, GDPR).
- Work closely with data engineers, analysts, and business stakeholders to comprehend data requirements and deliver solutions.
- Conduct data profiling, validation, and quality assurance to maintain clean and consistent data.
- Maintain comprehensive documentation for data models, pipelines, and architecture.

Required Skills & Qualifications:
- Proficiency in advanced SQL, including query tuning, indexing, and performance optimization.
- Experience in developing ETL/ELT workflows with tools like Spark, dbt, Talend, or Informatica.
- Familiarity with data orchestration frameworks such as Airflow, Dagster, Luigi, etc.
- Hands-on experience with cloud-based data platforms like Databricks, Snowflake, or similar technologies.
- Deep understanding of data warehousing principles like star/snowflake schema, slowly changing dimensions, etc.
- Knowledge of cloud services (AWS, GCP, or Azure) and data security best practices.
- Strong analytical and problem-solving skills in high-scale environments.

Preferred Qualifications:
- Exposure to real-time data pipelines like Kafka, Spark Streaming.
- Knowledge of data mesh or data fabric architecture paradigms.
- Certifications in Snowflake, Databricks, or relevant cloud platforms.
- Familiarity with Python or Scala for data engineering tasks.
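A minimal sketch of the kind of orchestrated ELT workflow this listing describes, assuming a recent Apache Airflow 2.x release. Source and target names and the transform logic are hypothetical placeholders, not any specific employer's pipeline.

```python
# Minimal Airflow sketch: a daily extract/load job with retries.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_transactions(**context):
    # Placeholder: pull raw transaction records from a source system.
    print("extracting raw transactions")


def load_to_warehouse(**context):
    # Placeholder: load the cleansed batch into the warehouse (e.g. Snowflake).
    print("loading curated transactions")


with DAG(
    dag_id="transactions_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
    load = PythonOperator(task_id="load", python_callable=load_to_warehouse)
    extract >> load
```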
Posted 1 day ago
7.0 - 12.0 years
0 Lacs
hyderabad, telangana
On-site
As a Lead Data Engineer with 7-12 years of experience, you will be an integral part of our team, contributing significantly to the design, development, and maintenance of our data infrastructure. Your primary responsibilities will revolve around creating and managing robust data architectures, ETL processes, data warehouses, and utilizing big data and cloud technologies to support our business intelligence and analytics needs. You will lead the design and implementation of data architectures that facilitate data warehousing, integration, and analytics platforms. Developing and optimizing ETL pipelines will be a key aspect of your role, ensuring efficient processing of large datasets and implementing data transformation and cleansing processes to maintain data quality. Your expertise will be crucial in building and maintaining scalable data warehouse solutions using technologies such as Snowflake, Databricks, or Redshift. Additionally, you will leverage AWS Glue and PySpark for large-scale data processing, manage data pipelines with Apache Airflow, and utilize cloud platforms like AWS, Azure, and GCP for data storage, processing, and analytics. Establishing data governance and security best practices, ensuring data integrity, accuracy, and availability, and implementing monitoring and alerting systems are vital components of your responsibilities. Collaborating closely with stakeholders, mentoring junior engineers, and leading data-related projects will also be part of your role. Furthermore, your technical skills should include proficiency in ETL tools like Informatica Power Center, Python, PySpark, SQL, RDBMS platforms, and data warehousing concepts. Soft skills such as excellent communication, leadership, problem-solving, and the ability to manage multiple projects effectively will be essential for success in this role. Preferred qualifications include experience with machine learning workflows, certification in relevant data engineering technologies, and familiarity with Agile methodologies and DevOps practices. Location: Hyderabad Employment Type: Full-time,
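As a hedged illustration of the ETL cleansing and transformation work this listing mentions, here is a compact PySpark sketch. Paths, column names, and the cleansing rules are hypothetical examples, not the employer's pipeline.

```python
# Illustrative PySpark cleansing step writing a partitioned curated dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_cleansing").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")  # hypothetical source path

cleansed = (
    raw.dropDuplicates(["order_id"])                      # remove duplicate events
       .filter(F.col("amount") > 0)                       # drop invalid amounts
       .withColumn("order_date", F.to_date("order_ts"))   # normalise the timestamp
       .fillna({"currency": "USD"})                       # default missing currency
)

# Partition by date so downstream warehouse loads and queries stay efficient.
cleansed.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
```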
Posted 1 day ago
4.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
As a Big Data Architect specializing in Databricks at Codvo, a global empathy-led technology services company, your role is critical in designing sophisticated data solutions that drive business value for enterprise clients and power internal AI products. Your expertise will be instrumental in architecting scalable, high-performance data lakehouse platforms and end-to-end data pipelines, making you the go-to expert for modern data architecture in a cloud-first world. Your key responsibilities will include designing and documenting robust, end-to-end big data solutions on cloud platforms (AWS, Azure, GCP) with a focus on the Databricks Lakehouse Platform. You will provide technical guidance and oversight to data engineering teams on best practices for data ingestion, transformation, and processing using Spark. Additionally, you will design and implement effective data models and establish data governance policies for data quality, security, and compliance within the lakehouse. Evaluating and recommending appropriate data technologies, tools, and frameworks to meet project requirements and collaborating closely with various stakeholders to translate complex business requirements into tangible technical architecture will also be part of your role. Leading and building Proof of Concepts (PoCs) to validate architectural approaches and new technologies in the big data and AI space will be crucial. To excel in this role, you should have 10+ years of experience in data engineering, data warehousing, or software engineering, with at least 4+ years in a dedicated Data Architect role. Deep, hands-on expertise with Apache Spark and the Databricks platform is mandatory, including Delta Lake, Unity Catalog, and Structured Streaming. Proven experience architecting and deploying data solutions on major cloud providers, proficiency in Python or Scala, expert-level SQL skills, strong understanding of modern AI concepts, and in-depth knowledge of data warehousing concepts and modern Lakehouse patterns are essential. This position is remote and based in India with working hours from 2:30 PM to 11:30 PM. Join us at Codvo and be a part of a team that values Product innovation, mature software engineering, and core values like Respect, Fairness, Growth, Agility, and Inclusiveness each day to offer expertise, outside-the-box thinking, and measurable results.,
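A minimal sketch of the lakehouse ingestion pattern referenced above (Delta Lake plus Structured Streaming), assuming a Databricks or Delta-enabled Spark runtime. Paths and the schema are illustrative placeholders.

```python
# Illustrative streaming ingest into a bronze Delta table with a restartable checkpoint.
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("events_to_delta").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("payload", StringType()),
    StructField("value", DoubleType()),
])

events = spark.readStream.schema(schema).json("/mnt/landing/events/")  # hypothetical landing zone

# Continuously append into a Delta table; the checkpoint makes the stream restartable.
query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/events/")
    .outputMode("append")
    .start("/mnt/lakehouse/bronze/events")
)
```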
Posted 1 day ago
3.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Project Role : Application Developer
Project Role Description : Design, build and configure applications to meet business process and application requirements.
Must have skills : Databricks Unified Data Analytics Platform
Good to have skills : Python (Programming Language), Apache Airflow
Minimum 3 Year(s) Of Experience Is Required
Educational Qualification : 15 years full time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve application performance and user experience.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform (see the sketch after this listing).
- Good To Have Skills: Experience with Apache Airflow, Python (Programming Language).
- Strong understanding of data integration and ETL processes.
- Experience with cloud-based data solutions and architectures.
- Familiarity with data governance and management best practices.

Additional Information:
- The candidate should have minimum 5 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Kolkata office.
- A 15 years full time education is required.
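As a hedged example of day-to-day platform automation on Databricks, here is a small sketch that triggers an existing job run via the Databricks Jobs REST API (version 2.1). The host, token, and job_id are hypothetical placeholders supplied from secure configuration in practice.

```python
# Illustrative only: start a Databricks job run and return its run_id.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.azuredatabricks.net
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token or service principal token


def run_job(job_id: int) -> int:
    """Kick off a job run via the Jobs 2.1 API and return the run_id."""
    resp = requests.post(
        f"{HOST}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"job_id": job_id},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]


if __name__ == "__main__":
    print("started run:", run_job(job_id=123))  # 123 is a placeholder job id
```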
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications 1) Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP, CRM. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417810 Relocation Package Yes
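For the "consume data from ERP/CRM via API" skill noted above, a brief hedged sketch: page through a REST endpoint and land the records as Parquet for a downstream pipeline. The URL, fields, and auth header are hypothetical placeholders, not a real Cummins or vendor endpoint.

```python
# Illustrative API extract: paginate a CRM endpoint and land the data as Parquet.
import pandas as pd
import requests

BASE_URL = "https://crm.example.com/api/v1/accounts"  # hypothetical CRM endpoint
HEADERS = {"Authorization": "Bearer <token>"}          # supplied via secrets in practice


def fetch_all(page_size: int = 500) -> pd.DataFrame:
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL, headers=HEADERS,
            params={"page": page, "per_page": page_size}, timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("results", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return pd.DataFrame.from_records(records)


if __name__ == "__main__":
    df = fetch_all()
    # Land the raw extract for the transformation layer to pick up.
    df.to_parquet("accounts_extract.parquet", index=False)
```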
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP, CRM. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417809 Relocation Package Yes
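For the "continuously monitor and troubleshoot data quality" responsibility this listing describes, a short hedged sketch of basic checks on a curated table. Table path, columns, and thresholds are hypothetical.

```python
# Illustrative data quality gate: row count and null-rate checks with a hard failure.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

df = spark.read.parquet("/mnt/curated/shipments/")  # hypothetical curated dataset

total = df.count()
null_keys = df.filter(F.col("shipment_id").isNull()).count()
null_rate = null_keys / total if total else 1.0

issues = []
if total == 0:
    issues.append("table is empty")
if null_rate > 0.01:
    issues.append(f"shipment_id null rate {null_rate:.2%} exceeds 1% threshold")

if issues:
    # A real pipeline would raise an alert (email, pager, dashboard) here as well.
    raise ValueError("data quality check failed: " + "; ".join(issues))
print(f"data quality OK: {total} rows, {null_rate:.2%} null keys")
```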
Posted 1 day ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Supports, develops and maintains a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycle, for data driven application. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. 
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 Years of experience. Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. 2) Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. 6) Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers. 7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. 8) Assist to resolve issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. 
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of API to consume data from ERP, CRM Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417808 Relocation Package Yes
Posted 1 day ago
7.5 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Databricks Unified Data Analytics Platform, Informatica Intelligent Cloud Services Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process while maintaining a focus on quality and efficiency. You will also engage in strategic planning to align application development with organizational goals, ensuring that all stakeholders are informed and involved throughout the project lifecycle. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate training and development opportunities for team members to enhance their skills. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Informatica Intelligent Cloud Services. - Good To Have Skills: Experience with cloud-based data integration tools. - Strong understanding of data engineering principles and practices. - Experience with big data technologies such as Apache Spark and Hadoop. - Familiarity with data governance and data quality frameworks. Additional Information: - The candidate should have minimum 7.5 years of experience in Databricks Unified Data Analytics Platform. - This position is based in Mumbai. - A 15 years full time education is required.
Posted 1 day ago
0 years
0 Lacs
India
Remote
CSQ326R35 Mission The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers, teammates, and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote. The Impact You Will Have Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems Own production rollouts of consumer and internally facing GenAI applications Serve as a trusted technical advisor to customers across a variety of domains Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap What We Look For Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy Expertise in deploying production-grade GenAI applications, including evaluation and optimizations Extensive years of hands-on industry data science experience, leveraging common machine learning and data science tools, i.e. pandas, scikit-learn, PyTorch, etc. Experience building production-grade machine learning deployments on AWS, Azure, or GCP Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through AI [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards. 
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
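As a toy illustration of the retrieval-augmented generation (RAG) pattern mentioned in this listing, here is a hedged sketch: retrieve relevant context, then prompt a model with it. Retrieval uses TF-IDF purely for simplicity, and llm_complete is a hypothetical stand-in for whatever model-serving endpoint a real application would call; this is not Databricks' Mosaic AI implementation.

```python
# Toy RAG sketch: TF-IDF retrieval plus a stubbed model call.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

DOCS = [
    "Delta Lake stores table data as Parquet files with a transaction log.",
    "MLflow tracks experiments, packages models, and manages deployment.",
    "Structured Streaming processes data incrementally in micro-batches.",
]


def retrieve(question: str, k: int = 2) -> list[str]:
    vec = TfidfVectorizer().fit(DOCS + [question])
    scores = cosine_similarity(vec.transform([question]), vec.transform(DOCS))[0]
    top = scores.argsort()[::-1][:k]
    return [DOCS[i] for i in top]


def llm_complete(prompt: str) -> str:
    # Placeholder: a production system would call an LLM serving endpoint here.
    return f"[model answer based on prompt of {len(prompt)} chars]"


def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm_complete(prompt)


print(answer("How does MLflow help with model management?"))
```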
Posted 1 day ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5 Please note that the Job will close at 12am on Posting Close date, so please submit your application prior to the Close Date Accountabilities What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms Data Transformation - Processes data by cleansing data and transforming them to proper storage structure for the purpose of querying and analysis using ETL and ELT process Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/Pyspark/SAS. Proven experience with large data sets and related technologies – Hadoop, Hive, Distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and its services such as Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience on Databricks, Delta Lake, Workflows. Should have knowledge of DevOps process and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience on any BI tool like Power BI (Good to have). Cloud migration experience (Good to have) Cloud and Data Engineering certification (Good to have) Working in an Agile environment 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage. What We Are Looking For Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities Fluency in English Analytical Skills Accuracy & Attention to Detail Numerical Skills Planning & Organizing Skills Presentation Skills Data Modeling and Database Design ETL (Extract, Transform, Load) Skills Programming Skills FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances. 
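Touching on the "Spark optimization" skill listed above, a short hedged sketch: broadcast a small dimension table to avoid a shuffle when joining it to a large fact DataFrame. Storage paths and column names are hypothetical, not FedEx systems.

```python
# Illustrative Spark optimization: broadcast join plus a partitioned curated output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("shipments_join").getOrCreate()

shipments = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/fact/shipments/")
regions = spark.read.parquet("abfss://lake@account.dfs.core.windows.net/dim/regions/")

# Broadcast the small dimension so each executor joins locally instead of shuffling.
enriched = shipments.join(F.broadcast(regions), on="region_id", how="left")

daily_volume = (
    enriched.groupBy("region_name", F.to_date("ship_ts").alias("ship_date"))
            .agg(F.count("*").alias("shipments"))
)
daily_volume.write.mode("overwrite").partitionBy("ship_date").parquet(
    "abfss://lake@account.dfs.core.windows.net/curated/daily_volume/"
)
```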
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 321767BR Job Type Full Time

Your role Do you have a curious mind, want to be involved in the latest technology trends and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services - Group Chief Technology Office team and become a core contributor for the execution of the bank's global AI Strategy, particularly to help the bank deploy AI models quickly and efficiently! We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption and Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators and AI agents) to easily discover, access data and build AI use-cases.

Responsibilities include:
- direct interaction with product owners and internal users to identify requirements, development of technical solutions and execution
- development of an SDK (Software Development Kit) to automatically capture Data Product, Dataset and AI/ML model metadata, leveraging LLMs to generate descriptive information about assets
- integration and publication of metadata into UBS's AI Use-case inventory, model artifact registry and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes
- design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools
- creation of a collection of starters/templates that accelerate the creation of new data products by leveraging a collection of the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem
- design and implementation of data contract and fine-grained access mechanisms to enable data consumption on a 'need to know' basis

Your team You will be part of the Data Product Framework team, which is a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner, an AI Inventory that describes the models that have been built to help users build their own use cases and validate them with Model Risk Management, a containerized model development environment for a user to experiment and produce their models and a streamlined MLOps process that helps them track their experiments and promote their models. 
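As a hedged illustration of capturing model metadata of the kind this role describes, here is a minimal sketch using MLflow's tracking API. The experiment name, tags, and the toy model are illustrative; a real SDK would additionally record data-product and lineage metadata into the firm's own catalogues.

```python
# Illustrative MLflow run: log params, a metric, a lineage-style tag, and the model artifact.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression(max_iter=200).fit(X, y)

mlflow.set_experiment("credit-scoring-demo")  # hypothetical experiment name
with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("train_accuracy", accuracy_score(y, model.predict(X)))
    mlflow.set_tag("data_product", "retail_credit_features_v1")  # illustrative lineage tag
    mlflow.sklearn.log_model(model, artifact_path="model")
```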
Your expertise PhD or Master's degree in Computer Science or any related advanced quantitative discipline 5+ years of industry experience with Python / Pandas, SQL / Spark, Azure fundamentals / Kubernetes and Gitlab additional experience in data engineering frameworks (Databricks / Kedro / Flyte), ML frameworks (MLFlow / DVC) and Agentic Frameworks (Langchain, Langgraph, CrewAI) is a plus ability to produce secure and clean code that is stable, scalable, operational, and well-performing. Be up to date with the latest IT standards (security, best practices). Understanding the security principles in the banking systems is a plus ability to work independently, manage individual project priorities, deadlines and deliverables willingness to quickly learn and adopt various technologies excellent English language written and verbal communication skills About Us UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries. How We Hire We may request you to complete one or more assessments during the application process. Learn more Join us At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Disclaimer / Policy Statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 1 day ago
4.0 - 7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5 Please note that the Job will close at 12am on Posting Close date, so please submit your application prior to the Close Date Accountabilities What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms Data Transformation - Processes data by cleansing data and transforming them to proper storage structure for the purpose of querying and analysis using ETL and ELT process Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/Pyspark/SAS. Proven experience with large data sets and related technologies – Hadoop, Hive, Distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and its services such as Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience on Databricks, Delta Lake, Workflows. Should have knowledge of DevOps process and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience on any BI tool like Power BI (Good to have). Cloud migration experience (Good to have) Cloud and Data Engineering certification (Good to have) Working in an Agile environment 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage. What We Are Looking For Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities Fluency in English Analytical Skills Accuracy & Attention to Detail Numerical Skills Planning & Organizing Skills Presentation Skills Data Modeling and Database Design ETL (Extract, Transform, Load) Skills Programming Skills FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances. 
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 1 day ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description You are a strategic thinker passionate about driving solutions in real estate analytics. You have found the right team. As an Associate in our Global Real Estate analytics department, you will spend each day defining, refining, and delivering key insights for our firm. You will support the department by running Alteryx workflows, designing and maintaining interactive dashboards, onboarding Genie using Databricks, writing SQL queries, and working with various data sources. Additionally, you will maintain documentation, manage SharePoint, and utilize reporting technology. Proficiency in Tableau, Alteryx, and SQL is essential for this position. Job Responsibilities Develop and maintain a robust core framework for the reporting and data visualization platform using tools such as Tableau, Alteryx, SQL and Excel. Design and develop efficient Key Performance Indicator (KPI) dashboards to support multiple business groups within Corporate Finance. Obtain feedback on dashboard iterations and incorporate feedback through continuous enhancements. Work with large datasets and various data sources to streamline automatic storytelling. Manage the dashboard data model and data intake process, ensuring the process is adequately documented and communicated. Provide effective report and application monitoring in production. Develop business understanding to provide future context for better data processing and reusability. Maintain documentation on issue corrective actions in line with best practices to ensure knowledge accessibility and continuous learning among the team. Required Qualifications, Capabilities, And Skills B.S. or M.S. in Computer Science or Engineering. 4 years of professional experience. Advanced proficiency with Tableau and Alteryx. Extensive experience in developing reporting solutions and dashboards. Proficiency in Databricks and strong SQL writing skills. Ability to quickly learn and assimilate business and technical knowledge. Ability to work within tight timelines while keeping management and key stakeholders appropriately updated. Strong organizational skills with the ability to drive and support change. Strong qualitative and quantitative analytical skills with the ability to synthesize large data sets and identify targeted, crisp messages. Excellent written and verbal communication and presentation skills. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. 
Visit our FAQs for more information about requesting an accommodation. About The Team Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Python + AWS/DataBricks Developer 📍 Hyderabad (Work from Office) 📅 5+ years experience | Immediate joiners preferred 🔹 Must-have Skills: Expert Python programming (3.7+) Strong AWS (EC2, S3, Lambda, Glue, CloudFormation) DataBricks platform experience ETL pipeline development SQL/NoSQL databases PySpark/Pandas proficiency 🔹 Good-to-have: AWS certifications Terraform knowledge Airflow experience Interested candidates can share profiles to shruti.pandey@codeethics.in Please mention the position you're applying for! #Hiring #ReactJS #Python #AWS #DataBricks #HyderabadJobs #TechHiring #WFO
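For the Python + AWS side of this stack, a small, hedged sketch of an S3-to-S3 ETL step that could run in a Lambda or Glue Python job; bucket names, keys, and columns are placeholders invented for illustration.

```python
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

def handler(event, context):
    # Hypothetical bucket/key names; in practice these would come from the event or config.
    obj = s3.get_object(Bucket="raw-data-bucket", Key="orders/2024/orders.csv")
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))

    # Simple cleanup/transform step: drop incomplete rows, derive a column.
    df = df.dropna(subset=["order_id", "amount"])
    df["amount_usd"] = df["amount"].astype(float).round(2)

    # Write the processed output back as Parquet for downstream querying (needs pyarrow).
    buf = io.BytesIO()
    df.to_parquet(buf, index=False)
    s3.put_object(Bucket="curated-data-bucket",
                  Key="orders/2024/orders.parquet",
                  Body=buf.getvalue())
    return {"rows": len(df)}
```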
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Your Future Evolves Here
Evolent Health has a bold mission to change the health of the nation by changing the way health care is delivered. Our pursuit of this mission is the driving force that brings us to work each day. We believe in embracing new ideas, challenging ourselves and failing forward. We respect and celebrate individual talents and team wins. We have fun while working hard, and Evolenteers often make a difference working in everything from scrubs to jeans. Are we growing? Absolutely, and globally. In 2021 we grew our teams by almost 50% and continue to grow even more in 2022. Are we recognized as a company that supports your career and growth, and as a great place to work? Definitely. Evolent Health International (Pune, India) was certified as a "Great Place to Work" in 2021. In 2020 and 2021, Evolent in the U.S. was both named to the Best Companies for Women to Advance list by Parity.org and earned a perfect score on the Human Rights Campaign (HRC) Foundation's Corporate Equality Index (CEI). This index is the nation's foremost benchmarking survey and report measuring corporate policies and practices related to LGBTQ+ workplace equality. We recognize employees that live our values, give back to our communities each year, and are champions for bringing our whole selves to work each day. If you're looking for a place where your work can be personally and professionally rewarding, don't just join a company with a mission. Join a mission with a company behind it.
What You'll Be Doing:
Job Summary: Design and develop BI reporting and data platforms. Lead the development of user-facing data visualization and presentation tools, including Microsoft SQL Server Reporting Services (SSRS) reports, Power BI dashboards, MicroStrategy, and Excel PivotTables. Work on the development of data retrieval and data management for Evolent Health. Responsible for ensuring that the organization's data assets are aligned with its strategic goals; the architecture should cover databases, data integration, and the means to get to the data. Help implement effective business analytics practices to enhance decision-making, efficiency, and performance. Assist with technology improvements to ensure continuous enhancement of the core BI platform.
Data Analysis: Perform complex data analysis using advanced SQL skills and Excel to support internal and external clients' ad-hoc data requests and queries for business continuity and analytics. Communicate with non-technical business users to gather specific requirements for reports and BI solutions. Provide maintenance support for existing BI applications and reports. Present work when requested and participate in knowledge-sharing sessions with team members.
Required Qualifications: 3-5 years of experience in the BI/data warehouse domain developing BI solutions and data analysis tasks using MSBI suites. Strong proficiency in Power BI: building reports, dashboards, DAX, and Power Query (M). Experience with Microsoft Fabric, including Lakehouse, Dataflows Gen2, Direct Lake capabilities, and Power Automate. Experience with Azure Data Services: Azure Data Factory, Azure Synapse, Azure Data Lake, or similar. Hands-on experience with SQL Server Reporting Services (SSRS) and SQL Server Integration Services (SSIS). Knowledge of advanced SQL for data manipulation and performance tuning. Experience implementing ETL/ELT pipelines. Ability to work with both relational and cloud-based data sources.
Preferred Qualifications: Healthcare industry experience with exposure to authorizations, claims, eligibility, and patient clinical data. Experience with Python, Spark, or Databricks for data engineering or transformation. Familiarity with DevOps/Git repositories for BI, including deployment automation and CI/CD in Azure DevOps. Understanding of data governance, security models, and compliance. Experience with semantic modeling in Power BI and/or tabular models using Analysis Services. Exposure to AI and machine learning integrations within Microsoft Fabric or Azure. Experience with Power Apps and Microsoft Purview.
Mandatory Requirements: Employees must have a high-speed broadband internet connection with a minimum speed of 50 Mbps and the ability to set up a wired connection to their home network to ensure effective remote work. These requirements may be updated as needed by the business.
Evolent Health is an equal opportunity employer and considers all qualified applicants equally without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, or disability status.
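As a rough illustration of the ad-hoc SQL analysis this role describes, a hedged sketch that pulls a SQL Server aggregate into pandas and hands it off as an Excel file; the connection string, table, and columns are hypothetical.

```python
import pandas as pd
import pyodbc

# Hypothetical SQL Server connection; real credentials would come from a secure store.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=reporting-db.example.com;DATABASE=Analytics;"
    "UID=report_user;PWD=***"
)

# Ad-hoc aggregate a BI analyst might hand off to a report or dashboard.
query = """
    SELECT region, COUNT(*) AS auth_count, AVG(turnaround_days) AS avg_turnaround
    FROM dbo.Authorizations
    WHERE received_date >= DATEADD(month, -3, GETDATE())
    GROUP BY region
    ORDER BY auth_count DESC;
"""

df = pd.read_sql(query, conn)
df.to_excel("authorizations_last_quarter.xlsx", index=False)  # requires openpyxl
conn.close()
```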
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Department: Information Technology
Location: APAC-India-IT Delivery Center Hyderabad
Description - Essential Duties and Responsibilities: Develop and maintain data pipelines using Azure native services such as ADLS Gen 2, Azure Data Factory, Synapse, Spark, Python, and Databricks, as well as AWS Cloud services and Databurst. Develop datasets required for business analytics in Power BI and Azure Data Warehouse. Ensure software development principles, standards, and best practices are followed. Maintain existing applications and provide operational support. Review and analyze user requirements and write system specifications. Ensure quality design, delivery, and adherence to corporate standards. Participate in daily stand-ups, reviews, design sessions, and architectural discussions. Other duties may be assigned.
What We're Looking For - Required Qualifications and Skills: 5+ years of experience in solution delivery for data analytics to generate insights for various departments in the organization. 5+ years of experience delivering solutions using the Microsoft Azure platform or AWS services, with an emphasis on data solutions and services. Extensive knowledge of writing SQL queries and experience in performance-tuning queries. Experience developing software architectures and key software components. Proficient in one or more of the following programming languages: C#, Java, Python, Scala, and related open-source frameworks. Understanding of data services including Azure SQL Database, Data Lake, Databricks, Data Factory, and Synapse. Data modeling experience on Azure DW/AWS, with an understanding of dimensional models, star schemas, and data vaults. Quick learner who is passionate about new technologies. Strong sense of ownership, customer obsession, and drive with a can-do attitude. Team player with great communication skills--listening, speaking, reading, and writing--in English. BS in Computer Science, Computer Engineering, or other quantitative fields such as Statistics, Mathematics, Physics, or Engineering.
Applicant Privacy Policy: Review our Applicant Privacy Policy for additional information.
Equal Opportunity Statement: Align Technology is an equal opportunity employer. We are committed to providing equal employment opportunities in all our practices, without regard to race, color, religion, sex, national origin, ancestry, marital status, protected veteran status, age, disability, sexual orientation, gender identity or expression, or any other legally protected category. Applicants must be legally authorized to work in the country for which they are applying, and employment eligibility will be verified as a condition of hire.
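A hedged sketch of orchestrating the Azure Data Factory piece of such a pipeline from Python, assuming the azure-mgmt-datafactory SDK; the subscription, resource group, factory, and pipeline names are placeholders, not actual resources from the posting.

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder identifiers; real values depend on the environment.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "analytics-rg"
FACTORY_NAME = "corp-adf"
PIPELINE_NAME = "load_sales_to_synapse"

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, SUBSCRIPTION_ID)

# Kick off the pipeline with a runtime parameter, then poll until it finishes.
run = adf.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-06-30"},
)

status = "InProgress"
while status in ("Queued", "InProgress"):
    time.sleep(30)
    status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```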
Posted 1 day ago
6.0 years
0 Lacs
India
On-site
Job Description:
Responsibilities: Develop and implement data models and algorithms to solve complex business problems. Utilize Databricks to manage and analyze large datasets efficiently. Collaborate with cross-functional teams to understand business requirements and deliver data-driven insights. Design and build scalable data pipelines and ETL processes. Perform data exploration, preprocessing, and feature engineering. Conduct statistical analysis and machine learning model development. Communicate findings and insights to stakeholders through data visualization and reports. Stay current with industry trends and best practices in data science and big data technologies.
Requirements: A minimum of 6 years of experience as a Data Scientist is required. Proven experience as a Data Scientist or in a similar role. Proficiency with Databricks and its ecosystem. Strong programming skills in Python, R, or Scala. Experience with big data technologies such as Apache Spark and Databricks. Knowledge of SQL and experience with relational databases. Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud). Strong analytical and problem-solving skills. Excellent communication and teamwork abilities. Bachelor's degree in Data Science, Computer Science, Statistics, or a related field (or equivalent experience).
Preferred Qualifications: Advanced degree (Master's or Ph.D.) in a relevant field. Experience with machine learning frameworks (e.g., TensorFlow, PyTorch). Knowledge of data visualization tools (e.g., Tableau, Power BI). Familiarity with version control systems (e.g., Git).
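As an illustration of the Databricks/Spark workflow implied above (feature engineering plus model development), a brief, hedged PySpark ML sketch; the table and columns are invented for the example.

```python
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler, StringIndexer
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.getOrCreate()

# Hypothetical curated table of customer activity with a churn label.
df = spark.table("analytics.customer_activity").dropna()

pipeline = Pipeline(stages=[
    StringIndexer(inputCol="plan_type", outputCol="plan_idx"),
    VectorAssembler(inputCols=["plan_idx", "tenure_months", "monthly_spend"],
                    outputCol="features"),
    LogisticRegression(featuresCol="features", labelCol="churned"),
])

train, test = df.randomSplit([0.8, 0.2], seed=42)
model = pipeline.fit(train)

auc = BinaryClassificationEvaluator(labelCol="churned").evaluate(model.transform(test))
print(f"Validation AUC: {auc:.3f}")
```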
Posted 1 day ago
5.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025 . Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp. Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns. At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and have a compatible timezone overlap with their team to facilitate seamless collaboration. Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, we enable you to select your main way of working, whether it's from home, one of our offices or hubs, or a co-working space near you. Job Overview: Does architecting high quality and scalable data pipelines powering business critical applications excite you? How about working with cutting edge technologies alongside some of the brightest and most collaborative individuals in the industry? Join us, in our mission to bring the best learning to every corner of the world! We’re looking for a passionate and talented individual with a keen eye for data to join the Data Engineering team at Coursera! Data Engineering plays a crucial role in building a robust and reliable data infrastructure that enables data-driven decision-making, as well as various data analytics and machine learning initiatives within Coursera. In addition, Data Engineering today owns many external facing data products that drive revenue and boost partner and learner satisfaction. You firmly believe in Coursera's potential to make a significant impact on the world, and align with our core values: Learners first: Champion the needs, potential, and progress of learners everywhere. Play for team Coursera: Excel as an individual and win as a team. Put Coursera’s mission and results before personal goals. Maximize impact: Increase leverage by focusing on things that produce bigger results with less effort. Learn, change, and grow: Move fast, take risks, innovate, and learn quickly. Invite and offer feedback with respect, courage, and candor. Love without limits: Celebrate the diversity and dignity of every one of our employees, learners, customers, and partners. Your Responsibilities Architect scalable data models and construct high quality ETL pipelines that act as the backbone of our core data lake, with cutting edge technologies such as Airflow, DBT, Databricks, Redshift, Spark. Your work will lay the foundation for our data-driven culture. 
Design, build, and launch self-serve analytics products. Your creations will empower our internal and external customers, providing them with rich insights to make informed decisions. Be a technical leader for the team. Your guidance in technical and architectural designs for major team initiatives will inspire others. Help shape the future of Data Engineering at Coursera and foster a culture of continuous learning and growth. Partner with data scientists, business stakeholders, and product engineers to define, curate, and govern high-fidelity data. Develop new tools and frameworks in collaboration with other engineers. Your innovative solutions will enable our customers to understand and access data more efficiently, while adhering to high standards of governance and compliance. Work cross-functionally with product managers, engineers, and business teams to enable major product and feature launches. Your Skills 5+ years experience in data engineering with expertise in data architecture and pipelines Strong programming skills in Python Proficient with relational databases, data modeling, and SQL Experience with big data technologies (eg: Hive, Spark, Presto) Familiarity with batch and streaming architectures preferred Hands-on experience with some of: AWS, Databricks, Delta Lake, Airflow, DBT, Redshift, Datahub, Elementary Knowledgeable on data governance and compliance best practices Ability to communicate technical concepts clearly and concisely Independence and passion for innovation and learning new technologies If this opportunity interest you, you might like these courses on Coursera - Big Data Specialization Data Warehousing for Business Intelligence IBM Data Engineering Professional Certificate Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California Candidates, please review our CCPA Applicant Notice here. For our Global Candidates, please review our GDPR Recruitment Notice here.
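To ground the pipeline tooling mentioned above, a minimal, hedged Airflow sketch of a daily extract-and-load task of the kind such a team might own; the DAG name, schedule, and task body are placeholders rather than Coursera's actual pipelines.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load(ds, **_):
    # Placeholder body: a real task might pull API data, stage it in S3,
    # and load it into Redshift or a Delta table via DBT or Spark.
    print(f"Extracting and loading data for {ds}")

with DAG(
    dag_id="daily_learner_events",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```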
Posted 1 day ago
5.0 years
0 Lacs
India
On-site
Job Title: Senior Machine Learning Engineer (Azure ML + Databricks + MLOps)
Experience: 5+ years in AI/ML Engineering
Employment Type: Full-Time
Job Summary: We are looking for a Senior Machine Learning Engineer with strong expertise in Azure Machine Learning and Databricks to lead the development and deployment of scalable AI/ML solutions. You’ll work with cross-functional teams to design, build, and optimize machine learning pipelines that power critical business functions.
Key Responsibilities: Design, build, and deploy scalable machine learning models using Azure Machine Learning (Azure ML) and Databricks. Develop and maintain end-to-end ML pipelines for training, validation, and deployment. Collaborate with data engineers and architects to structure data pipelines on Azure Data Lake, Synapse, or Delta Lake. Integrate models into production environments using Azure ML endpoints, MLflow, or REST APIs. Monitor and maintain deployed models, ensuring performance and reliability over time. Use Databricks notebooks and PySpark to process and analyze large-scale datasets. Apply MLOps principles using tools like Azure DevOps, CI/CD pipelines, and MLflow for versioning and reproducibility. Ensure compliance with data governance, security, and responsible AI practices.
Required Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. 5+ years of hands-on experience in machine learning or data science roles. Strong proficiency in Python, and experience with libraries like Scikit-learn, XGBoost, PyTorch, or TensorFlow. Deep experience with Azure Machine Learning services (e.g., workspaces, compute clusters, pipelines). Proficient in Databricks, including Spark (PySpark), notebooks, and Delta Lake. Strong understanding of MLOps, experiment tracking, model management, and deployment automation. Experience with data engineering tools (e.g., Azure Data Factory, Azure Data Lake, Azure Synapse).
Preferred Skills: Azure certifications (e.g., Azure AI Engineer Associate, Azure Data Scientist Associate). Familiarity with Kubernetes, Docker, and container-based deployments. Experience working with structured and unstructured data (NLP, time series, image data, etc.). Knowledge of cost optimization, security best practices, and scalability on Azure. Experience with A/B testing, monitoring model drift, and real-time inference.
Job Types: Full-time, Permanent
Benefits: Flexible schedule, Paid sick time, Paid time off, Provident Fund
Work Location: In person
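A compact, hedged sketch of the MLOps pattern described above: training a model, logging metrics, and registering it with MLflow so a serving endpoint can later pick up versioned artifacts. The experiment and model names are placeholders, not part of the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature table.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=7)

mlflow.set_experiment("churn-model")  # placeholder experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("auc", auc)

    # Registering under a name lets downstream deployment (e.g. an Azure ML
    # or Databricks serving endpoint) resolve versioned models by name.
    mlflow.sklearn.log_model(model, "model", registered_model_name="churn-classifier")
```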
Posted 1 day ago
2.0 - 3.0 years
0 Lacs
Telangana
On-site
Role: ML Engineer (Associate / Senior)
Experience: 2-3 years (Associate), 4-5 years (Senior)
Mandatory Skills: Python / MLOps / Docker and Kubernetes / FastAPI or Flask / CI/CD / Jenkins / Spark / SQL / RDB / Cosmos / Kafka / ADLS / API / Databricks
Location: Bangalore
Notice Period: less than 60 days
Other Skills: Azure / LLMOps / ADF / ETL
Job Description: We are seeking a talented and passionate Machine Learning Engineer to join our team and play a pivotal role in developing and deploying cutting-edge machine learning solutions. You will work closely with other engineers and data scientists to bring machine learning models from proof-of-concept to production, ensuring they deliver real-world impact and solve critical business challenges. Collaborate with data scientists, model developers, software engineers, and other stakeholders to translate business needs into technical solutions. Experience deploying ML models to production is expected. Create high-performance real-time inferencing APIs and batch inferencing pipelines to serve ML models to stakeholders. Integrate machine learning models seamlessly into existing production systems. Continuously monitor and evaluate model performance, and retrain the models automatically or periodically. Streamline existing ML pipelines to increase throughput. Identify and address security vulnerabilities in existing applications proactively. Design, develop, and implement machine learning models, preferably for insurance-related applications. Be well versed in the Azure ecosystem. Knowledge of NLP and Generative AI techniques and relevant experience will be a plus. Knowledge of machine learning algorithms and libraries (e.g., TensorFlow, PyTorch) will be a plus. Stay up to date on the latest advancements in machine learning and contribute to ongoing innovation within the team.
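A minimal, hedged sketch of the real-time inferencing API pattern this role calls for, using FastAPI (one of the listed mandatory skills); the model artifact path and feature schema are hypothetical.

```python
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving")

# Hypothetical pre-trained model artifact, loaded once at startup.
model = joblib.load("models/claims_model.joblib")

class Features(BaseModel):
    age: int
    annual_premium: float
    prior_claims: int

@app.post("/predict")
def predict(payload: Features):
    # Order of features must match the training pipeline.
    row = [[payload.age, payload.annual_premium, payload.prior_claims]]
    score = float(model.predict_proba(row)[0][1])
    return {"claim_risk": score}

# Run locally with: uvicorn app:app --reload   (then POST JSON to /predict)
```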
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary: We are seeking a skilled and detail-oriented Full Stack Developer with strong expertise in Python, Node.js, and cloud services on Azure. The ideal candidate will have hands-on experience in building scalable microservices, RESTful APIs, real-time communication with Socket.IO, and integrating enterprise-grade backend services. You will work closely with cross-functional teams to design, develop, and deploy full-stack solutions in a collaborative Agile environment.
Key Responsibilities: Design and develop scalable and maintainable full-stack applications using Python and Node.js with Express.js. Build and consume RESTful APIs using JSON, following MVC architecture patterns. Implement backend services integrating SQL, MongoDB, and Redis. Develop microservices-based architecture supporting scalable and decoupled systems. Integrate and optimize cloud-based solutions on Microsoft Azure, including Storage Accounts, Databricks, Service Bus Queues, Function Apps, Logic Apps, Event Hubs, Key Vault, and Virtual Machines (VMs). Implement real-time communication features using Socket.IO. Collaborate with DevOps and QA to deploy, monitor, and maintain applications on Azure. Use tools like Postman, Bruno, Swagger, and Jupyter for API testing and data validation. Write clean, modular, and testable code; perform code reviews and participate in sprint planning and retrospectives.
Required Skills & Experience -
Programming & Backend: Proficient in Python, Node.js, Express.js. Solid understanding of REST APIs, JSON, and MVC. Experience working with SQL databases (MSSQL) and NoSQL (MongoDB, Redis). Microservices architecture and containerization concepts.
Cloud & Infrastructure: Hands-on experience with Azure services: Storage Account, Databricks, Function App, Logic App, Event Hub, Key Vault, VMs, Service Bus. Familiarity with cloud deployment, monitoring, and logging.
Tools & Platforms: VS Code, Git, GitHub, Bitbucket, JIRA, Azure Portal, Postman, Bruno, WinSCP, Swagger, Databricks, Jupyter.
Communication: Experience with Socket.IO for real-time event-driven applications. Good written and verbal communication; ability to work with cross-functional teams.
Preferred Qualifications: Experience with containerization (Docker/Kubernetes) is a plus. Familiarity with CI/CD pipelines. Knowledge of Agile/Scrum methodologies. Azure certifications (optional but preferred).
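The posting's real-time work centers on Socket.IO; since these examples use Python throughout, here is a hedged sketch of the equivalent pattern with the Python Socket.IO server rather than the Node.js one, with placeholder event names and payloads.

```python
import socketio
import uvicorn

# ASGI Socket.IO server; event names and payloads below are placeholders.
sio = socketio.AsyncServer(async_mode="asgi", cors_allowed_origins="*")
app = socketio.ASGIApp(sio)

@sio.event
async def connect(sid, environ):
    print(f"client connected: {sid}")

@sio.on("order_update")
async def order_update(sid, data):
    # Broadcast the processed update to every connected client in real time.
    await sio.emit("order_status",
                   {"order_id": data.get("order_id"), "status": "processed"})

@sio.event
async def disconnect(sid):
    print(f"client disconnected: {sid}")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```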
Posted 1 day ago
12.0 - 15.0 years
2 - 4 Lacs
Hyderābād
Remote
Join Amgen's Mission to Serve Patients
If you feel like you’re part of something bigger, it’s because you are. At Amgen, our shared mission—to serve patients—drives all that we do. It is key to our becoming one of the world’s leading biotechnology companies. We are global collaborators who achieve together—researching, manufacturing, and delivering ever-better products that reach over 10 million patients worldwide. It’s time for a career you can be proud of.
Principal IS Architect
Live
What you will do
Let’s do this. Let’s change the world. In this vital role, we are seeking a visionary and technically exceptional Principal IS Architect to lead the design and development of enterprise-wide intelligent search solutions. This is a senior-level IT professional who designs and oversees the implementation of robust and scalable data and AI solutions, often utilizing the Java programming language and related technologies. The role requires a strong understanding of both data architecture principles and AI/ML concepts, along with expertise in Java development and cloud platforms. You’ll lead by example—mentoring engineers, setting standards, and driving the technical vision for our next-generation search capabilities. This person will also be responsible for defining the roadmap for products. They will work closely with development teams and act as a bridge between product owners and development teams to perform proofs of concept on provided designs and technology, develop reusable components, etc. This is a senior role in the organization which, along with a team of other architects, will help design the future state of technology at Amgen India.
Design and Strategy: Responsibilities include developing and maintaining foundational architecture for data and AI initiatives, defining the technical roadmap, and translating business requirements into technical specifications.
Data Architecture: This involves designing and implementing data models, database designs, and ETL processes, as well as leading the design of scalable data architectures. The role also includes establishing best practices for data management and ensuring data security and compliance.
AI Architecture and Implementation: Key tasks include architecting and overseeing the implementation of AI/ML frameworks and solutions, potentially with a focus on generative AI models, and defining processes for AI/ML development and MLOps.
Develop end-to-end solution architectures for data-driven and AI-focused applications, ensuring alignment with business objectives and technology strategy. Lead architecture design efforts across data pipelines, machine learning models, AI applications, and analytics platforms in our Gap Data Platform area. Collaborate closely with business partners, product managers, data scientists, software engineers, and the broader Global Technology Solutions teams in vetting solution designs and delivering business value. Provide technical leadership and mentoring in data engineering and AI best practices. Evaluate and recommend emerging data technologies, AI techniques, and cloud services to enhance business capabilities. Ensure the scalability, performance, and security of data and AI architectures. Establish and maintain architectural standards, including patterns and guidelines for data and AI projects. Create architecture artifacts (concept, system, data architecture) for data and AI projects and initiatives. Create and oversee an architecture center of excellence for the data and AI area to coach and mentor resources working in this area.
Set technical direction, best practices, and coding standards for search engineering across the organization. Review designs, mentor senior and mid-level engineers, and champion architecture decisions aligned with product goals and compliance needs. Own performance, scalability, observability, and reliability of search services in production. Resolve technical problems as they arise. Provide technical guidance and mentorship to junior developers. Continually research current and emerging technologies and propose changes where needed. Assess the business impact of particular technical choices. Provide updates to stakeholders on product development processes, costs, and budgets. Work closely with Information Technology professionals within the company to ensure hardware is available for projects and working properly. Work closely with project management teams to successfully monitor the progress of initiatives. Maintain a current understanding of best practices regarding system security measures. Keep a positive outlook in meeting challenges and working to a high level. Apply an advanced understanding of business analysis techniques and processes. Account for possible project challenges and constraints, including risks, time, resources, and scope. Possess strong rapid prototyping skills and quickly translate concepts into working code. Take ownership of complex software projects from conception to deployment. Manage software delivery scope, risk, and timeline. Participate in both front-end and back-end development using cloud technology. Develop innovative solutions using generative AI technologies. Define and implement robust software architectures on the cloud, AWS preferred. Conduct code reviews to ensure code quality and alignment to best practices. Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations. Identify and resolve technical challenges effectively. Stay updated with the latest trends and advancements. Work closely with the product team, business team, and other key partners.
Basic Qualifications: Master’s degree in computer science & engineering preferred with 12-15 years of software development experience, OR Bachelor’s degree in computer science & engineering preferred with 11-15 years of software development experience. Minimum of 7 years of professional experience in technology, including at least 3 years in a data architecture and AI solution architect role. Strong expertise in cloud platforms, preferably Azure and GCP, and associated data and AI services. Proven experience in architecting and deploying scalable data solutions, including data lakes, warehouses, and streaming platforms. Working knowledge of tools/technologies like Azure Data Factory, Confluent Kafka, Spark, Databricks, BigQuery, and Vertex AI. Deep understanding of AI/ML frameworks and tools such as TensorFlow, PyTorch, Spark ML, or Azure ML.
Preferred Qualifications: Programming Languages: Proficiency in multiple languages (e.g., Python, Java, Databricks, Vertex AI) is crucial and a must. Experienced with API integration, serverless, and microservices architecture. Proficiency with programming languages like Python, Java, or Scala. Proficiency with Azure Data Factory, Confluent Kafka, Spark, Databricks, BigQuery, and Vertex AI. Proficiency with AI/ML frameworks and tools such as TensorFlow, PyTorch, Spark ML, or Azure ML. Solid understanding of data governance, security, privacy, and compliance standards. Exceptional communication, presentation, and stakeholder management skills.
Experience working in agile project environments Good to Have Skills Willingness to work on AI Applications Experience with popular large language models Experience with Langchain or llamaIndex framework for language models Experience with prompt engineering, model fine tuning Knowledge of NLP techniques for text analysis and sentiment analysis Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, remote teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. Thrive What you can expect of us As we work to develop treatments that take care of others, we also work to care for our teammates’ professional and personal growth and well-being. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now for a career that defies imagination In our quest to serve patients above all else, Amgen is the first to imagine, and the last to doubt. Join us. careers.amgen.com Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
Posted 1 day ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Cortex is urgently hiring for the role: "Data Engineer"
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in office required)
Notice Period: Immediate to 10 days only
Key skills: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.
Role Overview: We are looking for a highly skilled Data Engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.
Key Responsibilities: Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks. Architect scalable data streaming and processing solutions to support healthcare data workflows. Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data. Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.). Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions. Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows. Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering. Stay updated with the latest cloud technologies, big data frameworks, and industry trends.
If you are interested, kindly send your resume to us by clicking "Easy Apply". This job is posted by Aishwarya.K, Business HR - Day recruitment, Cortex Consultants LLC (US) | Cortex Consulting Pvt Ltd (India) | Tcell (Canada). US | India | Canada
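A hedged sketch of the Kafka-to-Databricks streaming pattern at the heart of this role: reading a Kafka topic with Spark Structured Streaming and landing parsed events in a Delta table. Broker addresses, the topic, the schema, and the paths are placeholders for illustration only.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("claims-stream").getOrCreate()

# Hypothetical event schema for the incoming JSON messages.
schema = StructType([
    StructField("claim_id", StringType()),
    StructField("member_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # placeholder brokers
    .option("subscribe", "claims-events")                # placeholder topic
    .option("startingOffsets", "latest")
    .load())

parsed = (raw
    .select(F.from_json(F.col("value").cast("string"), schema).alias("evt"))
    .select("evt.*"))

# Continuous append into a Delta table that Databricks jobs and BI can query.
(parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/claims")  # placeholder path
    .outputMode("append")
    .start("/mnt/curated/claims"))
```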
Posted 1 day ago
0 years
2 - 7 Lacs
Hyderābād
On-site
Summary - About the role
Bringing life-changing medicines to millions of people, Novartis sits at the intersection of cutting-edge medical science and innovative digital technology. As a global company, the resources and opportunities for growth and development are plentiful, including global and local cross-functional careers, a diverse learning suite of thousands of programs, and an in-house marketplace for rotations and project work. With a strong medicines pipeline, our current transformation will not just deliver growth for our business but continue to allow us to bring innovative medicines to patients quickly. Come to work each day with an inclusive and collaborative US&I business-facing team. As an Associate Director Analytics Products in the DDIT US&I - Data, Analytics, and Insights team, you’ll have opportunities to contribute to the modernization of the data ecosystem, deliver DnA products to the countries and the core brand teams to deliver exceptional customer experiences, and reach twice as many patients twice as fast, helping them prevail over severe diseases by leveraging data-driven insights.
Purpose and Focus Areas: As a key leader within Advanced Analytics Products, you will manage the technical rollout of Analytics and AI Products for US and international markets, working in conjunction with the DnA (Data and Analytics) products team. Develop the capability to create advanced analytics product/service roadmaps from concept to development to launch, encompassing technology adoption, product engineering, service design, security and compliance, and business process change. Incubate and adopt emerging (GenAI, AI, NLP) technologies and launch products/services faster with rapid prototyping and iterative methods to prove and establish value. For identified technologies, launch to enterprise scale, ensuring value is derived. Drive innovation (GenAI, AI/MLOps, NLP) using appropriate people, processes, partners, and tools. Partner with IT Architecture to incubate and adopt emerging DnA technologies. Focus and align DnA innovation efforts with the business strategy, IT strategy, and legal/regulatory requirements. Establish and update innovation strategies, implementation plans, and value cases to implement emerging technologies. Manage vendor and senior stakeholder engagements.
About the Role - Key Responsibilities:
Data and Insights Management: Collaborate with process owners and analytical product users to understand their compliance priorities and enable them through data and actionable insights. Oversee the creation, approval, and prioritization of analytics projects, ensuring alignment with strategic goals and resource availability. Accountable for the successful rollout of the assigned portfolio of AI and Analytics Products that are incubated, established, and delivered across cross-functional business areas. Serve as the point of escalation, review, and approval for key issues and decisions.
Monitor the financial aspects of analytics projects, including budgeting, cost management, and financial reporting to ensure projects are delivered within budget and provide value to the organization Take decisions on the capability development, external and internal resources and capacity plans in line with business priorities and strategies and close collaboration with delivery teams. Lead the rollout of Data Science and AI products in key assigned markets at speed, pace and lower overall TCO by driving reuse, leveraging self-service & automation. Conceptualize, design, and develop data science projects and proof of concepts on prioritized use cases. Identify and develop DSAI capabilities & ecosystem partnerships in alignment with digital strategy and in support of Enterprise Architecture and Integration. Work with team members such as data engineers, data scientists, business analysts, UX designers, and developers to create actionable insights published on insights products Promote a data-driven decision-making culture through coaching, mentoring, and introducing industry best practices Ensure adherence to Security and Compliance policies and procedures. Essential Requirements Education & Qualifications University degree in computer sciences, business or similar Experience Experience in leading cross-functional teams, a product-centric approach to defining solutions, and expertise in Agile delivery are crucial. The ability to manage multiple concurrent delivery cycles while maintaining a strong foundation in analytical data life cycle management is essential. Additionally, proficiency in consulting, influencing, and persuading, along with unbossed leadership, IT governance, building high-performing teams, vendor management, and innovative analytical technologies, is highly desirable. Deployment of digital platforms and services at scale to deliver the digital strategy. Solid understanding of analytical and technical frameworks for descriptive and prescriptive analytics Strong delivery and program management skills Familiarity with AWS, Databricks, and Snowflake service offerings. Abreast of emerging technology within AI/ML space Strong collaborative interactions with customer-facing business teams. Track record delivering global solutions at scale. Ability to work and lead (a cross-functional team) in a matrix environment. Product-centric approach to defining solutions. Collaborate with business in gathering requirements, grooming product backlogs, driving delivery, and ongoing data product enhancements. Agile delivery experience managing multiple concurrent delivery cycles with sound foundation in Analytical Data life cycle management. Soft Skills - Consulting, Influencing & persuading, Unbossed Leadership, IT Governance, Building High Performing Teams, Vendor Management, Innovative & Analytical Technologies Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams' representative of the patients and communities we serve. Accessibility and accommodation Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to diversityandincl.india@novartis.com and let us know the nature of your request and your contact information. 
Please include the job requisition number in your message Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards Division Operations Business Unit CTS Location India Site Hyderabad (Office) Company / Legal Entity IN10 (FCRS = IN010) Novartis Healthcare Private Limited Functional Area Technology Transformation Job Type Full time Employment Type Regular Shift Work No
Posted 1 day ago