12.0 - 16.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are an experienced Data Architect with over 12 years of expertise in data architecture, data engineering, and enterprise-scale data solutions. Your strong background in Microsoft Fabric Data Engineering, Azure Synapse, Power BI, and Data Lake will be instrumental in driving strategic data initiatives for our organization in Hyderabad, India. In this role, you will design and implement scalable, secure, and high-performance data architecture solutions utilizing Microsoft Fabric and related Azure services. Your responsibilities will include defining data strategies aligned with business goals, architecting data pipelines and warehouses, collaborating with stakeholders to define data requirements, and providing technical leadership in data engineering best practices. Your qualifications include 12+ years of experience in data engineering or related roles, proven expertise in Microsoft Fabric and Azure Data Services, hands-on experience in modern data platform design, proficiency in SQL, Python, Spark, and Power BI, as well as strong problem-solving and communication skills. Preferred qualifications include Microsoft certifications, experience with DevOps and CI/CD for data projects, exposure to real-time streaming and IoT data, and prior Agile/Scrum environment experience. If you are passionate about driving innovation in data architecture, optimizing data performance, and leading data initiatives that align with business objectives, we encourage you to apply for this full-time Data Architect position in Hyderabad, India.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Durgapur, West Bengal
On-site
You will be joining Pinnacle Infotech, a company that values inclusive growth in an agile and diverse environment. With over 30 years of global experience, 3,400+ experts, and 15,000+ projects completed across 43+ countries for 5,000+ clients, you will have the opportunity to work on impactful global projects. At Pinnacle Infotech, you will experience rapid career advancement, cutting-edge training, and a supportive community that celebrates uniqueness and embraces E.A.R.T.H. values. As an MLOps Engineer, your primary responsibility will be to build, deploy, and maintain the infrastructure required for machine learning models and ETL data pipelines. You will collaborate closely with data scientists and software developers to streamline machine learning operations, manage data workflows, and ensure that ML solutions are scalable, reliable, and secure. The ideal candidate for this role should have a Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field, along with at least 3 years of experience in MLOps, data engineering, or a similar role. Proficiency in programming languages such as Python, Spark, and SQL is essential, as well as experience with ML model deployment frameworks and tools like MLflow. Hands-on experience with cloud platforms (AWS, Azure, GCP), containerization (Docker), orchestration tools (Kubernetes), DevOps practices, CI/CD pipelines, and monitoring tools is also required. Key Responsibilities: - Data Engineering and Pipeline Management: Design, develop, optimize, and maintain ETL processes and data pipelines, ensuring data quality, integrity, and consistency. Collaborate with data scientists to make data available in the right format for machine learning. - ML Operations and Deployment: Design and optimize scalable ML deployment pipelines, develop CI/CD pipelines for automated model training and deployment, and implement containerization and orchestration tools for ML workflows. Monitor and troubleshoot model performance in production environments. - Infrastructure Management: Manage cloud infrastructure to support data and ML operations, optimize workflows for large-scale datasets, and set up monitoring tools for infrastructure and application performance. - Collaboration and Best Practices: Work closely with data science, software development, and product teams to optimize model performance, and develop best practices for ML lifecycle management. If you are interested in this exciting opportunity, please share your resume at sunitas@pinnacleinfotech.com. Join us at Pinnacle Infotech and drive swift career growth while working on impactful global projects!
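For illustration, a minimal sketch of the kind of training-and-registration step such an MLOps pipeline might automate, assuming a scikit-learn model and a standard MLflow tracking setup; the experiment name, model name, and toy data are placeholders, not part of the posting:

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy data standing in for a feature table produced by an upstream ETL pipeline
X, y = make_classification(n_samples=1_000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model")  # placeholder experiment name

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("accuracy", acc)
    # Registering the model makes the new version visible to a CI/CD job
    # that can promote it after automated checks
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="churn-classifier")
```

Registering a model this way assumes a tracking server with a model registry backend; in a full pipeline, a separate CI/CD stage would typically pick up the registered version, run validation tests, and promote it to staging or production.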
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working as an Informatica BDM professional at PibyThree Consulting Pvt Ltd. in Pune, Maharashtra. PibyThree is a global cloud consulting and services provider, focusing on Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. The company's goal is to help businesses succeed by leveraging technology for automation and increased productivity. Your responsibilities will include: - Having at least 4 years of development and design experience in Informatica Big Data Management - Demonstrating excellent SQL skills - Working hands-on with HDFS, HiveQL, BDM Informatica, Spark, HBase, Impala, and other big data technologies - Designing and developing BDM mappings in Hive mode for large volumes of INSERT/UPDATE - Creating complex ETL mappings using various transformations such as Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, Lookups, Filters, Sequence, Router, and Update Strategy - Ability to debug Informatica and utilize tools like Sqoop and Kafka This is a full-time position that requires you to work in person during day shifts. The preferred education qualification is a Bachelor's degree, and the preferred experience includes a total of 4 years of work experience with 2 years specifically in Informatica BDM.
Posted 2 days ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have 5-12 years of experience in Big Data and data-related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services like AWS or Azure Databricks is essential. The role requires the ability to efficiently lead a team, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the category of Data Engineer and is suitable for individuals with backgrounds as ML/AI Engineers, Data Scientists, or Software Engineers.
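As a rough sketch of the stream-processing pattern this posting refers to (Kafka feeding Spark Structured Streaming), assuming the spark-sql-kafka connector is on the classpath; the broker address, topic, schema, and output paths are placeholder assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

# Expected shape of each JSON event on the topic (illustrative schema)
schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw Kafka stream; value arrives as bytes and is parsed as JSON
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "orders")                       # placeholder topic
       .load())

parsed = raw.select(from_json(col("value").cast("string"), schema).alias("e")).select("e.*")

# Append parsed events to a landing area; checkpointing gives exactly-once sinks
query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/bronze/orders")
         .option("checkpointLocation", "/chk/orders")
         .outputMode("append")
         .start())
query.awaitTermination()
```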
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum and Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes: 5-8 years of experience Familiarity with analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications 1) Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses and DataLake.
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP and CRM. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417810 Relocation Package Yes
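For illustration, a minimal sketch of the kind of data-quality monitoring step this posting describes, assuming PySpark; the table path, column names, and thresholds are placeholder assumptions, and a production pipeline would publish results to an alerting channel rather than raising an exception:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical staging table loaded by an upstream ingestion job
df = spark.read.parquet("/lake/staging/customer_orders")

# Simple rule-based checks; each entry counts rows that violate the rule
checks = {
    "null_order_ids": df.filter(F.col("order_id").isNull()).count(),
    "duplicate_order_ids": df.count() - df.dropDuplicates(["order_id"]).count(),
    "negative_amounts": df.filter(F.col("amount") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Stand-in for pushing an alert to monitoring before halting the pipeline
    raise ValueError(f"Data quality checks failed: {failed}")
```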
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Designs and provides guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizes database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum and Kanban. Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies.
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes: 5-8 years of experience Familiarity with analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications 1) Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses and DataLake.
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP). 6) Take part in evaluation of new data tools, POCs and provide suggestions. 7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization. 8) Proactively address and resolve issues that compromise data accuracy and usability. Preferred Skills Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus. API: Working knowledge of APIs to consume data from ERP and CRM. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417809 Relocation Package Yes
Posted 2 days ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Supports, develops and maintains a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, and retention of data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual, physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycles, for data-driven applications. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 years of experience. Relevant experience preferred, such as working in temporary student employment, internships, co-ops, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source tools such as Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications 1) Work closely with business Product Owner to understand product vision. 2) Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Work under limited supervision to design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses and DataLake. 5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. 6) Take part in evaluation of new data tools and POCs with guidance and help from senior data engineers. 7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. 8) Assist in resolving issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate-level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of APIs to consume data from ERP and CRM. Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417808 Relocation Package Yes
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Apache Spark Good to have skills : NA Minimum 3 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the business environment. You will also engage in testing and troubleshooting to enhance application performance and user experience, while continuously seeking ways to improve processes and solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Required active participation/contribution in team discussions. - Contribute to providing solutions for work-related problems. - Assist in the documentation of application processes and workflows. - Engage in code reviews to ensure quality and adherence to best practices. Professional & Technical Skills: - Must To Have Skills: Proficiency in Apache Spark. - Strong understanding of distributed computing principles. - Experience with data processing frameworks and tools. - Familiarity with programming languages such as Java or Scala. - Knowledge of cloud platforms and services for application deployment. Additional Information: - The candidate should have a minimum of 3 years of experience in Apache Spark. - This position is based at our Noida office. - A 15 years full time education is required.
Posted 2 days ago
7.5 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Databricks Unified Data Analytics Platform, Informatica Intelligent Cloud Services Good to have skills : NA Minimum 7.5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process while maintaining a focus on quality and efficiency. You will also engage in strategic planning to align application development with organizational goals, ensuring that all stakeholders are informed and involved throughout the project lifecycle. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Facilitate training and development opportunities for team members to enhance their skills. - Monitor project progress and implement necessary adjustments to meet deadlines. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform, Informatica Intelligent Cloud Services. - Good To Have Skills: Experience with cloud-based data integration tools. - Strong understanding of data engineering principles and practices. - Experience with big data technologies such as Apache Spark and Hadoop. - Familiarity with data governance and data quality frameworks. Additional Information: - The candidate should have minimum 7.5 years of experience in Databricks Unified Data Analytics Platform. - This position is based in Mumbai. - A 15 years full time education is required.
Posted 2 days ago
0 years
0 Lacs
India
Remote
CSQ326R35 Mission The AI Forward Deployed Engineering (AI FDE) team is a highly specialized customer-facing AI team at Databricks. We deliver professional services engagements to help our customers build and productionize first-of-its-kind AI applications. We work cross-functionally to shape long-term strategic priorities and initiatives alongside engineering, product, and developer relations, as well as support internal subject matter expert (SME) teams. We view our team as an ensemble: we look for individuals with strong, unique specializations to improve the overall strength of the team. This team is the right fit for you if you love working with customers and teammates and fueling your curiosity for the latest trends in GenAI, LLMOps, and ML more broadly. This role can be remote. The Impact You Will Have Develop cutting-edge GenAI solutions, incorporating the latest techniques from our Mosaic AI research to solve customer problems Own production rollouts of consumer and internally facing GenAI applications Serve as a trusted technical advisor to customers across a variety of domains Present at conferences such as Data + AI Summit, recognized as a thought leader internally and externally Collaborate cross-functionally with the product and engineering teams to influence priorities and shape the product roadmap What We Look For Experience building GenAI applications, including RAG, multi-agent systems, Text2SQL, fine-tuning, etc., with tools such as HuggingFace, LangChain, and DSPy Expertise in deploying production-grade GenAI applications, including evaluation and optimizations Extensive hands-on industry data science experience, leveraging common machine learning and data science tools, e.g. pandas, scikit-learn, PyTorch, etc. Experience building production-grade machine learning deployments on AWS, Azure, or GCP Graduate degree in a quantitative discipline (Computer Science, Engineering, Statistics, Operations Research, etc.) or equivalent practical experience Experience communicating and/or teaching technical concepts to non-technical and technical audiences alike Passion for collaboration, life-long learning, and driving business value through AI [Preferred] Experience using the Databricks Intelligence Platform and Apache Spark™ to process large-scale distributed datasets About Databricks Databricks is the data and AI company. More than 10,000 organizations worldwide — including Comcast, Condé Nast, Grammarly, and over 50% of the Fortune 500 — rely on the Databricks Data Intelligence Platform to unify and democratize data, analytics and AI. Databricks is headquartered in San Francisco, with offices around the globe and was founded by the original creators of Lakehouse, Apache Spark™, Delta Lake and MLflow. To learn more, follow Databricks on Twitter, LinkedIn and Facebook. Benefits At Databricks, we strive to provide comprehensive benefits and perks that meet the needs of all of our employees. For specific details on the benefits offered in your region, please visit https://www.mybenefitsnow.com/databricks. Our Commitment to Diversity and Inclusion At Databricks, we are committed to fostering a diverse and inclusive culture where everyone can excel. We take great care to ensure that our hiring practices are inclusive and meet equal employment opportunity standards.
Individuals looking for employment at Databricks are considered without regard to age, color, disability, ethnicity, family or marital status, gender identity or expression, language, national origin, physical and mental ability, political affiliation, race, religion, sexual orientation, socio-economic status, veteran status, and other protected characteristics. Compliance If access to export-controlled technology or source code is required for performance of job duties, it is within Employer's discretion whether to apply for a U.S. government license for such positions, and Employer may decline to proceed with an applicant on this basis alone.
Posted 3 days ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming Operations through Tech and Analytics. Do You Dream Big? We Need You. Job Description Job Title: Senior Data Scientist Location: Bangalore Reporting to: Senior Manager Analytics 1) Purpose of the role We seek a highly skilled Senior Machine Learning Engineer / Senior Data Scientist to design, develop, and deploy advanced machine learning models and systems. The ideal candidate will have deep expertise in machine learning algorithms, data processing, and model deployment, with a proven track record of delivering scalable AI solutions in production environments. This role requires strong technical leadership, collaboration with cross-functional teams, and a passion for solving complex problems. 2) Key tasks & accountabilities Model Development: Design, develop, and optimize machine learning models for various applications, including but not limited to natural language processing, computer vision, and predictive analytics. Data Pipeline Management: Build and maintain robust data pipelines for preprocessing, feature engineering, and data augmentation to support model training and evaluation. Model Deployment: Deploy machine learning models into production environments, ensuring scalability, reliability, and performance using tools like Docker, Kubernetes, or cloud platforms preferably Azure. Research and Innovation: Stay updated on the latest advancements in machine learning and AI, incorporating state-of-the-art techniques into projects to improve performance and efficiency. Collaboration: Work closely with data scientists, software engineers, product managers, and other stakeholders to translate business requirements into technical solutions. Performance Optimization: Monitor and optimize model performance, addressing issues like model drift, bias, and scalability challenges. Code Quality: Write clean, maintainable, and well-documented code, adhering to best practices for software development and version control (e.g., Git). Mentorship: Provide technical guidance and mentorship to junior engineers, fostering a culture of learning and innovation within the team. 3) Qualifications, Experience, Skills Level of educational attainment required Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, or a related field. PhD is a plus. Previous work experience 5+ years of experience in machine learning, data science, or a related field. Proven experience in designing, training, and deploying machine learning models in production. Hands-on experience with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes). Technical Skills required Proficiency in Python and libraries/frameworks such as TensorFlow, PyTorch, Scikit-learn, or Hugging Face. Strong understanding of machine learning algorithms (e.g., regression, classification, clustering, deep learning, reinforcement learning, optimization). Experience with big data technologies (e.g., Hadoop, Spark, or similar) and data processing pipelines. Familiarity with MLOps practices, including model versioning, monitoring, and CI/CD for ML workflows. Knowledge of software engineering principles, including object-oriented programming, API development, and microservices architecture. 
Other Skills required Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work in a fast-paced, dynamic environment and manage multiple priorities. Experience with generative AI models or large language models (LLMs). Familiarity with distributed computing or high-performance computing environments. And above all of this, an undying love for beer! We dream big to create a future with more cheers.
Posted 3 days ago
5.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients’ needs and exceeding their expectations. Senior Software Engineer Group Description Data Solutions - Platforms and Environments is the industry-leading content delivery platform. Clients seamlessly access organized and connected content that is easily discoverable, explorable, and procured via the FactSet Marketplace. Data is delivered via a variety of technologies and formats that meet the needs of our clients’ workflows. By enabling our clients to utilize their preferred choice of industry standard databases, programming languages, and data visualization tools, we empower them to focus on the core competencies needed to drive their business. The Data Solutions - Platforms and Environments solutions portfolio includes Standard DataFeed, Data Exploration, OnDemand (API), Views, Cornerstone, Exchange DataFeed, Benchmark Feeds, the Open:FactSet Marketplace, DataDictionary, Navigator, and other non-workstation initiatives. Job Description The Data Solutions - Platforms and Environments team is looking for a talented, highly motivated Senior Software Engineer (Full Stack Engineer) to join our Navigator Application initiatives, an important part of one of FactSet’s highest profile and most strategic areas of investment and development. As the Full Stack Senior Software Engineer, you will design and develop applications, including UI, API, and database frameworks, as well as data engineering pipelines; help implement improvements to existing pipelines and infrastructure; and provide production support. You will collaborate closely with Product Developers/Business Analysts to capture technical requirements. FactSet is happy to set up an information session with an Engineer working on this product to talk about the product, team and the interview process. What You’ll Do Implement new components and application features for the client-facing application as a Full Stack Developer. Maintain and resolve bugs in existing components Contribute new features, fixes, and refactors to the existing code Perform code reviews and coach engineers with respect to best practices Work with other engineers in following the test-driven methodology in an agile environment Collaborate with other engineers and Product Developers in a Scrum Agile environment using Jira and Confluence Ability to work as part of a geographically diverse team Ability to create and review documentation and test plans Estimate task sizes and regularly communicate progress in daily standups and biweekly Scrum meetings Coordinate with other teams across offices and departments What We’re Looking For Bachelor’s degree in Engineering or relevant field required. 5 to 7 years of relevant experience Expert-level proficiency writing and optimizing code in Python. Proficient in frontend technologies such as Vue.js (preferred) or ReactJS and experience with JavaScript, CSS, and HTML.
Good knowledge of REST API Development, preferably Python Flask, Open API Good knowledge of Relational databases, preferably with MSSQL or Postgres Good Knowledge of GenAI and Vector Databases is a plus Good understanding of general database design and architecture principles A realistic, pragmatic approach. Can deliver functional prototypes that can be enhanced & optimized in later phases Strong written and verbal communication skills Working experience on AWS services, Lambda, EC2, S3, AWS Glue etc. Strong Working experience with any container / PAAS technology (Docker or Heroku) ETL and Data pipelines experience a plus. Working experience of Apache Spark, Apache Airflow, GraphQL, is a plus Experience in developing event driven distributed serverless Infrastructure (AWS-Lambda), SNS-SQS is a plus. Must be a Voracious Learner. What's In It For You At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means: The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up. Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days. Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives. A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions. Career progression planning with dedicated time each month for learning and development. Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging. Learn More About Our Benefits Here. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications. Company Overview FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees’ Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn. At FactSet, we celebrate difference of thought, experience, and perspective. Qualified applicants will be considered for employment without regard to characteristics protected by law.
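As an illustrative sketch of the Python Flask REST API development this posting mentions, using an in-memory store and placeholder routes and data rather than a real database; Flask 2.x route decorators are assumed and none of the names below come from the posting itself:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store standing in for a relational database table
PRODUCTS = {"1": {"id": "1", "name": "Standard DataFeed", "status": "active"}}

@app.get("/api/v1/products/<product_id>")
def get_product(product_id):
    """Return a single product or a 404 if the id is unknown."""
    product = PRODUCTS.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

@app.post("/api/v1/products")
def create_product():
    """Create a product from the JSON request body and return it with a 201."""
    payload = request.get_json(force=True)
    product_id = str(len(PRODUCTS) + 1)
    PRODUCTS[product_id] = {"id": product_id, **payload}
    return jsonify(PRODUCTS[product_id]), 201

if __name__ == "__main__":
    app.run(debug=True)  # development server only; use a WSGI server in production
```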
Posted 3 days ago
4.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5 Please note that the Job will close at 12am on Posting Close date, so please submit your application prior to the Close Date. Accountabilities What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization. Data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data Data Quality Management - Cleanse the data and improve data quality and readiness for analysis. Drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms Data Transformation - Processes data by cleansing data and transforming them to proper storage structure for the purpose of querying and analysis using ETL and ELT processes Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies – Hadoop, Hive, Distributed computing systems, Spark optimization. Experience on cloud platforms (preferably Azure) and its services: Azure Data Factory (ADF), ADLS Storage, and Azure DevOps. Hands-on experience on Databricks, Delta Lake, Workflows. Should have knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience on any BI tool like Power BI (Good to have). Cloud migration experience (Good to have) Cloud and Data Engineering certification (Good to have) Working in an Agile environment 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage. What We Are Looking For Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities Fluency in English Analytical Skills Accuracy & Attention to Detail Numerical Skills Planning & Organizing Skills Presentation Skills Data Modeling and Database Design ETL (Extract, Transform, Load) Skills Programming Skills FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
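For illustration, a small sketch of the kind of PySpark ETL step implied by this posting: reading raw files from ADLS, cleansing them, and writing a partitioned Delta table. The storage paths, container names, and columns are placeholder assumptions, and the Delta Lake libraries (available by default on Databricks) are assumed to be installed:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks the SparkSession already exists; the builder is shown for completeness
spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Hypothetical raw landing zone in ADLS Gen2 (path is a placeholder)
raw = spark.read.json("abfss://landing@myaccount.dfs.core.windows.net/orders/")

# Basic cleansing: de-duplicate, derive a partition column, drop incomplete rows
cleaned = (raw
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_timestamp"))
           .filter(F.col("amount").isNotNull()))

# Append to a curated Delta table partitioned by date for efficient downstream queries
(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("order_date")
 .save("abfss://curated@myaccount.dfs.core.windows.net/orders_delta/"))
```

In an orchestrated setup, a scheduler such as Azure Data Factory or Databricks Workflows would typically trigger this job and handle retries and alerting around it.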
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 3 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 321767BR Job Type Full Time Your role Do you have a curious mind, want to be involved in the latest technology trends and like to solve problems that have a meaningful benefit to hundreds of users across the bank? Join our Tech Services - Group Chief Technology Office team and become a core contributor to the execution of the bank's global AI strategy, particularly to help the bank deploy AI models quickly and efficiently! We are looking for an experienced Data Engineer or ML Engineer to drive the delivery of an innovative ecosystem of tools and services. In this AI-focused role, you will contribute to the development of an SDK for Data Producers across the firm to build high-quality autonomous Data Products for cross-divisional consumption and for Data Consumers (e.g. Data Scientists, Quantitative Analysts, Model Developers, Model Validators and AI agents) to easily discover and access data and build AI use-cases. Responsibilities include: direct interaction with product owners and internal users to identify requirements, development of technical solutions and execution; developing an SDK (Software Development Kit) to automatically capture Data Product, Dataset and AI/ML model metadata, also leveraging LLMs to generate descriptive information about assets; integration and publication of metadata into UBS's AI Use-case inventory, model artifact registry and Enterprise Data Mesh data product and dataset catalogue for discovery and regulatory compliance purposes; design and implementation of services that seamlessly collect runtime evidence and operational information about a data product or model and publish it to appropriate visualization tools; creation of a collection of starters/templates that accelerate the creation of new data products by leveraging a collection of the latest tools and services and providing diverse and rich experiences to the Devpod ecosystem; and design and implementation of data contract and fine-grained access mechanisms to enable data consumption on a 'need to know' basis. Your team You will be part of the Data Product Framework team, which is a newly established function within Group Chief Technology Office. We provide solutions to help the firm embrace Artificial Intelligence and Machine Learning. We work with the divisions and functions of the firm to provide innovative solutions that integrate with their existing platforms to provide new and enhanced capabilities. One of our current aims is to help a data scientist get a model into production in an accelerated timeframe with the appropriate controls and security. We offer a number of key capabilities: data discovery that uses AI/ML to help users find data and obtain access in a secure and controlled manner, an AI Inventory that describes the models that have been built to help users build their own use cases and validate them with Model Risk Management, a containerized model development environment for a user to experiment and produce their models, and a streamlined MLOps process that helps them track their experiments and promote their models.
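As a hedged sketch of the metadata-capture idea described above (not UBS's actual SDK), the snippet below collects basic dataset metadata from a pandas DataFrame and posts it to a catalogue endpoint; the dataclass fields, helper names, and URL are all illustrative assumptions:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

import pandas as pd
import requests  # assumed available; the endpoint below is a placeholder


@dataclass
class DatasetMetadata:
    name: str
    owner: str
    columns: list
    row_count: int
    captured_at: str


def capture_dataset_metadata(df: pd.DataFrame, name: str, owner: str) -> DatasetMetadata:
    """Collect basic descriptive metadata from a pandas DataFrame."""
    return DatasetMetadata(
        name=name,
        owner=owner,
        columns=[{"name": c, "dtype": str(t)} for c, t in df.dtypes.items()],
        row_count=len(df),
        captured_at=datetime.now(timezone.utc).isoformat(),
    )


def publish(meta: DatasetMetadata, catalogue_url: str) -> None:
    """Publish metadata to a (hypothetical) data product catalogue endpoint."""
    resp = requests.post(catalogue_url, data=json.dumps(asdict(meta)),
                         headers={"Content-Type": "application/json"}, timeout=10)
    resp.raise_for_status()


df = pd.DataFrame({"trade_id": [1, 2], "notional": [1.0e6, 2.5e6]})
publish(capture_dataset_metadata(df, "trades_sample", "data-product-team"),
        "https://example.internal/catalogue/api/datasets")  # placeholder URL
```

A real SDK of this kind would also attach model metadata, access controls, and runtime evidence, and would authenticate against the firm's catalogue and registry services rather than a plain HTTP endpoint.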
Your expertise
- PhD or Master's degree in Computer Science or a related advanced quantitative discipline
- 5+ years of industry experience with Python / Pandas, SQL / Spark, Azure fundamentals / Kubernetes and GitLab
- additional experience in data engineering frameworks (Databricks / Kedro / Flyte), ML frameworks (MLflow / DVC) and agentic frameworks (LangChain, LangGraph, CrewAI) is a plus
- ability to produce secure, clean code that is stable, scalable, operational and well-performing; up to date with the latest IT standards (security, best practices); understanding of security principles in banking systems is a plus
- ability to work independently and manage individual project priorities, deadlines and deliverables
- willingness to quickly learn and adopt various technologies
- excellent English written and verbal communication skills

About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 3 days ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Key Responsibilities: Design and develop high-performance backend services using Java (18/21) and Spring Boot Build scalable and distributed data pipelines using Apache Spark Develop and maintain microservices-based architectures Work on cloud-native deployments, preferably on AWS (EC2, S3, EMR, Lambda, etc.) Optimize data processing systems for performance, scalability, and reliability Collaborate with data engineers, architects, and product managers to translate business requirements into technical solutions Ensure code quality through unit testing, integration testing, and code reviews Troubleshoot and resolve issues in production and non-production environments Required Skills and Experience: 5+ years of professional experience in software engineering Strong programming expertise in Core Java (18/21) Hands-on experience with Apache Spark and distributed data processing Proven experience with Spring Boot and RESTful API development Solid understanding of microservices architecture and patterns Proficiency in cloud platforms, especially AWS (preferred) Experience with SQL/NoSQL databases and data lake/storage systems Familiarity with CI/CD tools and containerization (Docker/Kubernetes is a plus) What We Offer: - We offer a market-leading salary along with a comprehensive benefits package to support your well-being. - Enjoy a hybrid or remote work setup that prioritizes work-life balance and personal well-being. - We invest in your career through continuous learning and internal growth opportunities. - Be part of a dynamic, inclusive, and vibrant workplace where your contributions are recognized and rewarded. - We believe in straightforward policies, open communication, and a supportive work environment where everyone thrives. About the Company: https://predigle.com/ https://www.espergroup.com/ Predigle, an EsperGroup company, focuses on building disruptive technology platforms to transform daily business operations. Predigle has expanded rapidly to offer various products and services. Predigle Intelligence (Pi) is a comprehensive portable AI platform that offers a low-code/no-code AI design solution for solving business problems.
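For flavour, a hedged sketch of the kind of distributed Spark batch pipeline described above; the role itself is Java/Spring Boot-centric, so PySpark is used here purely to keep the example short, and the paths and column names are made up.

```python
# Minimal PySpark sketch of a distributed batch aggregation pipeline.
# All paths and column names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-aggregation").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/orders/")  # assumed input location

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Write the curated rollup, partitioned by date (assumed output location).
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/daily_orders/"
)
```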
Posted 3 days ago
4.0 - 7.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Responsible for developing, optimizing, and maintaining business intelligence and data warehouse systems, ensuring secure, efficient data storage and retrieval, enabling self-service data exploration, and supporting stakeholders with insightful reporting and analysis. Grade - T5 Please note that the Job will close at 12am on the Posting Close date, so please submit your application prior to the Close Date. Accountabilities What your main responsibilities are: Data Pipeline - Develop and maintain scalable data pipelines and build out new API integrations to support continuing increases in data volume and complexity. Data Integration - Connect offline and online data to continuously improve overall understanding of customer behavior and journeys for personalization; data pre-processing including collecting, parsing, managing, analyzing and visualizing large sets of data. Data Quality Management - Cleanse the data and improve data quality and readiness for analysis; drive standards, define and implement/improve data governance strategies and enforce best practices to scale data analysis across platforms. Data Transformation - Process data by cleansing it and transforming it into the proper storage structure for querying and analysis using ETL and ELT processes. Data Enablement - Ensure data is accessible and usable to the wider enterprise to enable a deeper and more timely understanding of operations. Qualifications & Specifications Master's/Bachelor's degree in Engineering/Computer Science/Math/Statistics or equivalent. Strong programming skills in Python/PySpark/SAS. Proven experience with large data sets and related technologies – Hadoop, Hive, distributed computing systems, Spark optimization. Experience with cloud platforms (preferably Azure) and their services: Azure Data Factory (ADF), ADLS Storage, Azure DevOps. Hands-on experience with Databricks, Delta Lake, Workflows. Should have knowledge of DevOps processes and tools like Docker, CI/CD, Kubernetes, Terraform, Octopus. Hands-on experience with SQL and data modeling to support the organization's data storage and analysis needs. Experience with any BI tool like Power BI (good to have). Cloud migration experience (good to have). Cloud and Data Engineering certification (good to have). Working in an Agile environment. 4-7 years of relevant work experience needed. Experience with stakeholder management will be an added advantage. What We Are Looking For Education: Bachelor's degree or equivalent in Computer Science, MIS, Mathematics, Statistics, or similar discipline. Master's degree or PhD preferred. Knowledge, Skills And Abilities Fluency in English Analytical Skills Accuracy & Attention to Detail Numerical Skills Planning & Organizing Skills Presentation Skills Data Modeling and Database Design ETL (Extract, Transform, Load) Skills Programming Skills FedEx was built on a philosophy that puts people first, one we take seriously. We are an equal opportunity/affirmative action employer and we are committed to a diverse, equitable, and inclusive workforce in which we enforce fair treatment, and provide growth opportunities for everyone. All qualified applicants will receive consideration for employment regardless of age, race, color, national origin, genetics, religion, gender, marital status, pregnancy (including childbirth or a related medical condition), physical or mental disability, or any other characteristic protected by applicable laws, regulations, and ordinances.
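As a hedged illustration of the ETL/ELT and Databricks/Delta Lake work described above (not FedEx's actual pipeline), a minimal PySpark cleanse-and-load step might look like this; the storage path, table name and columns are invented.

```python
# Illustrative sketch: a simple cleanse-and-load ETL step for a Databricks/Delta
# environment. Paths, table names and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("customer-cleanse").getOrCreate()

# Read raw JSON from the lake (assumed path).
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/customers/")

clean = (
    raw
    .dropDuplicates(["customer_id"])                 # basic data quality step
    .filter(F.col("customer_id").isNotNull())
    .withColumn("email", F.lower(F.trim("email")))   # normalize a text column
)

# Delta is the storage format named in the posting; the table name is hypothetical.
clean.write.format("delta").mode("overwrite").saveAsTable("curated.customers")
```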
Our Company FedEx is one of the world's largest express transportation companies and has consistently been selected as one of the top 10 World’s Most Admired Companies by "Fortune" magazine. Every day FedEx delivers for its customers with transportation and business solutions, serving more than 220 countries and territories around the globe. We can serve this global network due to our outstanding team of FedEx team members, who are tasked with making every FedEx experience outstanding. Our Philosophy The People-Service-Profit philosophy (P-S-P) describes the principles that govern every FedEx decision, policy, or activity. FedEx takes care of our people; they, in turn, deliver the impeccable service demanded by our customers, who reward us with the profitability necessary to secure our future. The essential element in making the People-Service-Profit philosophy such a positive force for the company is where we close the circle, and return these profits back into the business, and invest back in our people. Our success in the industry is attributed to our people. Through our P-S-P philosophy, we have a work environment that encourages team members to be innovative in delivering the highest possible quality of service to our customers. We care for their well-being, and value their contributions to the company. Our Culture Our culture is important for many reasons, and we intentionally bring it to life through our behaviors, actions, and activities in every part of the world. The FedEx culture and values have been a cornerstone of our success and growth since we began in the early 1970’s. While other companies can copy our systems, infrastructure, and processes, our culture makes us unique and is often a differentiating factor as we compete and grow in today’s global marketplace.
Posted 3 days ago
8.0 years
0 Lacs
India
Remote
Job Title: GCP Data Engineer Location: Remote (Only from India) Employment Type: Contract (Long-Term) Start Date: Immediate Time Zone Overlap: Must be available to work during EST hours (Canada) Dual Employment: Not permitted – must be terminated if applicable About the Role: We are looking for a highly skilled GCP Data Engineer to join our international team. The ideal candidate will have strong experience with Google Cloud Platform's data tools, particularly DataProc and BigQuery, and will be comfortable working in a remote, collaborative environment. You will play a key role in designing, building, and optimizing data pipelines and infrastructure that drive business insights. Key Responsibilities: Design, develop, and maintain scalable data pipelines and ETL processes on GCP Leverage GCP DataProc and BigQuery to process and analyze large volumes of data Write efficient, maintainable code using Python and SQL Develop Spark-based data workflows using PySpark Collaborate with cross-functional teams in an international environment Ensure data quality, integrity, and security Participate in code reviews and optimize system performance Required Qualifications: 5–8 years of hands-on experience in Data Engineering Proven expertise in GCP DataProc and BigQuery Strong programming skills in Python and SQL Solid experience with PySpark for distributed data processing Fluent English with excellent communication skills Ability to work independently in a remote team environment Comfortable working during the Canada EST time zone overlap Optional / Nice-to-Have Skills: Experience with additional GCP tools and services Familiarity with CI/CD for data engineering workflows Exposure to data governance and data security best practices Interview Process: Technical Test (Online screening) 15-minute HR Interview Technical Interview with 1–2 rounds Please apply only if you match the above JD: hiring@khey-digit.com
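As a rough, hedged sketch of the Dataproc + BigQuery pipeline work described above, the snippet below uses the spark-bigquery connector; all project, dataset, table and bucket names are placeholders, and in practice such a script would typically be submitted with gcloud dataproc jobs submit pyspark.

```python
# Hedged sketch of a Dataproc PySpark job: read raw events from BigQuery,
# aggregate, and write the rollup back. Project/dataset/bucket names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bq-events-rollup").getOrCreate()

# Read from BigQuery via the spark-bigquery connector (assumed source table).
events = spark.read.format("bigquery").load("example_project.analytics.events")

rollup = (
    events.groupBy("event_date", "country")
    .agg(F.count("*").alias("events"))
)

# Write the aggregate back to BigQuery (assumed destination; the connector needs a staging bucket).
(
    rollup.write.format("bigquery")
    .option("temporaryGcsBucket", "example-temp-bucket")
    .mode("overwrite")
    .save("analytics.daily_events")
)
```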
Posted 3 days ago
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description Are you looking for an exciting opportunity to join a dynamic and growing team in a fast-paced and challenging area? This is a unique opportunity for you to work in our team to partner with the Business to provide a comprehensive view. As a Wholesale Credit Portfolio Analytics Analyst in the Wholesale Credit Portfolio Analytics team, you will be responsible for creating valuable risk analytics solutions using advanced analytical frameworks and the firm's big data resources. The focus will be on leveraging data to improve the end-to-end credit risk process across the wholesale portfolio. Additionally, the role involves clearly and concisely communicating findings and insights to stakeholders. As part of Risk Management, you are at the center of keeping JPMorgan Chase strong and resilient. You help the firm grow its business in a responsible way by anticipating new and emerging risks, and using your expert judgement to solve real-world challenges that impact our company, customers and communities. Our culture in Risk Management and Compliance is all about thinking outside the box, challenging the status quo and striving to be best-in-class. Job Responsibilities Develop and maintain credit risk rating methodologies, tools, and frameworks to improve risk management processes, including counterparty rating models, exposure management, and credit approvals. Collaborate with internal model review and controls teams to ensure new methodologies are approved and compliant. Use data science techniques to derive insights and communicate findings. Contribute to generating new ideas to address both ad hoc and strategic projects. Present findings and recommendations to senior management through presentations. Required Qualifications, Skills And Capabilities Relevant analytics, model/methodology development or credit risk experience Self-starter with creative problem-solving skills. Ideal candidates have experience in quantitative method development and data analysis and are comfortable discovering and communicating ideas through data. Degree in an analytical field preferred (e.g., Data Science, Computer Science, Engineering, Mathematics, Statistics) Experience with modern analytic and data tools, particularly Python/Anaconda and/or R, TensorFlow and/or Keras/PyTorch, Spark, or SQL. Excellent problem-solving, communications, and teamwork skills. Financial service background preferred, but not required. Desire to use modern technologies as a disruptive influence within banking. ABOUT US JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law.
We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P. Morgan’s Commercial & Investment Bank is a global leader across banking, markets, securities services and payments. Corporations, governments and institutions throughout the world entrust us with their business in more than 100 countries. The Commercial & Investment Bank provides strategic advice, raises capital, manages risk and extends liquidity in markets around the world.
Posted 3 days ago
3.0 years
0 Lacs
India
On-site
Lucidworks is leading digital transformation for some of the world's biggest retailers, financial services firms, manufacturers, and B2B commerce organizations. We believe that the core of a great digital experience starts with search and browse. Our Deep Learning technology captures user behavior and utilizes machine learning to connect people with the products, content, and information they need. Brands including American Airlines, Lenovo, Red Hat, and Cisco Systems rely on Lucidworks' suite of products to power commerce, customer service, and workplace applications that delight customers and empower employees. Lucidworks believes in the power of diversity and inclusion to help us do our best work. We are an Equal Opportunity employer and welcome talent across a full range of backgrounds, orientation, origin, and identity in an inclusive and non-discriminatory way. About the Team The technical support team leverages their extensive experience supporting large-scale Solr clusters and the Lucene/Solr ecosystem. Their day might include troubleshooting errors and attempting to fix or develop workarounds, diagnosing network and environmental issues, learning your customer's infrastructure and technologies, as well as reproducing bugs and opening Jira tickets for the engineering team. Their primary tasks are break/fix scenarios where the diagnostics quickly bring network assets back online and prevent future problems, which has a huge impact on our customers’ business. About the Role As a Search Engineer in Technical Support, you will play a critical role in helping our clients achieve success with our products. You will be responsible for assisting clients directly in resolving any technical issues they encounter, as well as answering questions about the product and feature functionality. You will work closely with internal teams such as Engineering and Customer Success to resolve a variety of issues, including product defects, performance issues, and feature requests. This role requires excellent problem-solving skills and attention to detail, strong communication abilities, and a deep understanding of search technology. Additionally, this role requires the ability to work independently and as part of a team, and comfort working with both technical and non-technical stakeholders. The successful candidate will demonstrate a passion for delivering an outstanding customer experience, balancing technical expertise with empathy for the customer’s needs. This role is open to candidates in India. The role is expected to participate in weekend on-call rotations.
Responsibilities Field incoming questions, help users configure Lucidworks Fusion and its components, and help them to understand how to use the features of the product Troubleshoot complex search issues in and around Lucene/Solr Document solutions into knowledge base articles for use by our customer base in our knowledge center Identify opportunities to provide customers with additional value through follow-on products and/or services Communicate high-value use cases and customer feedback to our Product Development and Engineering teams Collaborate across teams internally to diagnose and resolve critical issues Participate in a 24/7/365 on-call rotation, which includes weekend and holiday shifts Skills & Qualifications 3+ years of hands-on experience with Lucene/Solr or other search technologies is required BS or higher in Engineering or Computer Science is preferred 3+ years professional experience in a customer-facing level 2-3 tech support role Experience with technical support CRM systems (Salesforce, Zendesk, etc.) Ability to clearly communicate with customers by email and phone Proficiency with Java and one or more common scripting languages (Python, Perl, Ruby, etc.) Proficiency with Unix/Linux systems (command line navigation, file system permissions, system logs and administration, scripting, networking, etc.) Exposure to other related open source projects (Mahout, Hadoop, Tika, etc.) and commercial search technologies Enterprise Search, eCommerce, and/or Business Intelligence experience Knowledge of data science and machine learning concepts Experience with cloud computing platforms (GCP, Azure, AWS, etc.) and Kubernetes Startup experience is preferred Our Stack Apache Lucene/Solr, ZooKeeper, Spark, Pulsar, Kafka, Grafana Java, Python, Linux, Kubernetes Zendesk, Jira
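For illustration, a small hedged sketch of the kind of quick health check a support engineer might run against a Solr collection while reproducing a customer issue; the host, collection and query are assumptions.

```python
# Illustrative Solr health check: run a query against the standard /select handler
# and report the hit count and Solr-reported query time. Endpoint is assumed.
import requests

SOLR_URL = "http://localhost:8983/solr/products/select"  # assumed host and collection

def slow_query_check(q: str = "*:*", rows: int = 5):
    """Run a query and print the hit count and QTime (ms) from the response header."""
    resp = requests.get(SOLR_URL, params={"q": q, "rows": rows, "wt": "json"})
    resp.raise_for_status()
    body = resp.json()
    print("hits:", body["response"]["numFound"])
    print("QTime (ms):", body["responseHeader"]["QTime"])
    return body["response"]["docs"]

if __name__ == "__main__":
    slow_query_check("category:books")  # example query term
```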
Posted 3 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title - Data Engineering Lead Overall Years of Experience - 8 to 10 Years Relevant Years of Experience - 4+ Data Engineering Lead The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Essential Roles and Responsibilities Must Have Skills Azure Data Lake Azure Synapse Analytics Azure Data Factory Azure Databricks Python (PySpark, NumPy, etc.) SQL ETL Data warehousing Azure DevOps Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, Spark Streaming Experience in integrating with business intelligence tools such as Power BI Good To Have Skills Big Data technologies (e.g., Hadoop, Spark) Data security General Skills Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables Escalates dependencies and risks Works with most DevOps tools, with limited supervision Completion of assigned tasks on time and regular status reporting Should be able to train new team members Knowledge of cloud solutions such as Azure or AWS, along with DevOps/Cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually Should be able to build strong relationships with project stakeholders EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
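The streaming-pipeline item above (Azure Event Hubs with Spark Streaming) could be sketched roughly as below, reading an Event Hub through its Kafka-compatible endpoint with Structured Streaming and landing raw events in the lake as Delta; the namespace, hub name, storage paths and the omitted SASL credentials are all placeholders, not a reference implementation.

```python
# Hedged sketch: Structured Streaming from an Event Hub's Kafka-compatible endpoint
# into a Delta "bronze" layer on ADLS. Names and paths are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")  # assumed namespace
    .option("subscribe", "telemetry")                                             # assumed event hub
    .option("kafka.security.protocol", "SASL_SSL")
    .option("kafka.sasl.mechanism", "PLAIN")
    # .option("kafka.sasl.jaas.config", "...")  # Event Hubs connection string omitted for brevity
    .load()
)

parsed = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

query = (
    parsed.writeStream.format("delta")
    .option("checkpointLocation", "abfss://checkpoints@examplelake.dfs.core.windows.net/telemetry/")
    .outputMode("append")
    .start("abfss://bronze@examplelake.dfs.core.windows.net/telemetry/")
)
```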
Posted 3 days ago
0 years
0 Lacs
India
On-site
Hadoop Admin
Location - Bangalore (1st priority) / Pune / Chennai
Interview Mode - Level 1 or 2 will be F2F discussion
Experience - 7+ Yrs
Regular Shift - 9 AM to 6 PM
JOB SUMMARY:
1) Strong expertise in installing, configuring, and maintaining Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Oozie, Zookeeper, etc.).
2) Monitor cluster performance and capacity; troubleshoot and resolve issues proactively.
3) Manage cluster upgrades, patching, and security updates with minimal downtime.
4) Implement and maintain data security, authorization, and authentication (Kerberos, Ranger, or Sentry).
5) Configure and manage Hadoop high availability, disaster recovery, and backup strategies.
6) Automate cluster monitoring, alerting, and performance tuning.
7) Work closely with data engineering teams to ensure smooth data pipeline operations.
8) Perform root cause analysis for recurring system issues and implement permanent fixes.
9) Develop and maintain system documentation, including runbooks and SOPs.
10) Support integration with third-party tools (Sqoop, Flume, Kafka, Airflow, etc.).
11) Participate in on-call rotation and incident management for production support.
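As a hedged illustration of the cluster monitoring and alerting automation mentioned above, a small helper could poll the YARN ResourceManager REST API; the hostname and alert threshold are assumptions.

```python
# Illustrative monitoring helper: poll the YARN ResourceManager cluster metrics
# endpoint and flag low headroom. The ResourceManager host is a placeholder.
import requests

RM_METRICS_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/metrics"  # assumed host

def check_cluster_headroom(threshold: float = 0.9):
    """Print an alert if memory or vcore utilisation exceeds the threshold."""
    metrics = requests.get(RM_METRICS_URL, timeout=10).json()["clusterMetrics"]
    mem_used = metrics["allocatedMB"] / max(metrics["totalMB"], 1)
    vcores_used = metrics["allocatedVirtualCores"] / max(metrics["totalVirtualCores"], 1)
    if mem_used > threshold or vcores_used > threshold:
        print(f"ALERT: memory {mem_used:.0%}, vcores {vcores_used:.0%} utilised")
    return mem_used, vcores_used

if __name__ == "__main__":
    check_cluster_headroom()
```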
Posted 3 days ago
0 years
0 Lacs
India
On-site
The ideal candidate will be responsible for developing high-quality applications. They will also be responsible for designing and implementing testable and scalable code. Responsibilities: Lead backend Python development for innovative healthcare technology solutions Oversee a backend team to achieve product and platform goals in the B2B HealthTech domain Design and implement scalable backend infrastructures with seamless API integration Ensure availability on immediate or short notice for efficient onboarding and project ramp-up Optimize existing backend systems based on real-time healthcare data requirements Collaborate with cross-functional teams to ensure alignment between tech and business goals Review and refine code for quality, scalability, and performance improvements Ideal Candidate: Experienced in building B2B software products using agile methodologies Strong proficiency in Python, with a deep understanding of backend system architecture Comfortable with fast-paced environments and quick onboarding cycles Strong communicator who fosters a culture of innovation, ownership, and collaboration Passionate about driving real-world healthcare impact through technology Skills Required: Primary: TypeScript, AWS, Python, RESTful APIs, Backend Architecture Additional: SQL/NoSQL databases, Docker/Kubernetes (preferred) Strongly Good to Have: Prior experience in Data Engineering, especially in healthcare or real-time analytics Familiarity with ETL pipelines, data lake/warehouse solutions, and stream processing frameworks (e.g., Apache Kafka, Spark, Airflow) Understanding of data privacy, compliance (e.g., HIPAA), and secure data handling practices Hiring Process: Profile Shortlisting, Tech Interview, Tech Interview, Culture Fit
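As a minimal, hedged sketch of the Python/RESTful API side of this role, the snippet below uses FastAPI; the framework choice and the vitals schema are assumptions for illustration only, not the company's stated stack.

```python
# Minimal hedged sketch of a Python REST endpoint. FastAPI and the schema are
# illustrative assumptions; persistence is deliberately kept trivial.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class VitalsReading(BaseModel):
    patient_id: str
    heart_rate: int
    recorded_at: str  # ISO-8601 timestamp

READINGS: list[VitalsReading] = []  # stand-in for a real datastore

@app.post("/v1/vitals", status_code=201)
def ingest_reading(reading: VitalsReading) -> dict:
    """Accept a vitals reading and acknowledge how many are stored."""
    READINGS.append(reading)
    return {"stored": len(READINGS)}
```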
Posted 3 days ago
5.0 years
0 Lacs
India
On-site
About Nacre Capital: Nacre Capital is a global venture builder specialized in creating, building and growing disruptive start-ups with deep technologies that significantly impact lives. We are an international team of entrepreneurs, business leaders and experts including - pioneering scientists, renowned technologists, researchers, growth experts and thought leaders - that together develop and transform ventures into world-class disruptive market-leading companies. About the Role: We're looking for a highly skilled Cold Calling Specialist to join our dynamic team. This isn't just a cold calling role; it's a mission-critical position for a true hunter who thrives on initiating conversations and opening doors. Your primary focus will be outbound cold calling to meticulously identified prospects, with a hint of research to refine your targeting and approach. If you possess an unparalleled ability to engage, qualify, and inspire interest from scratch, and you're driven by measurable success, we want to hear from you. This is an opportunity for a top performer to be the spearhead of our sales pipeline, solely dedicated to creating new opportunities and fueling our growth. Key Responsibilities: Own the Outbound: Execute high-volume cold calling campaigns to a targeted list of prospects, consistently exceeding daily and weekly call metrics Master the Art of the Opening: Skillfully navigate gatekeepers and objections to connect directly with decision-makers and key influencers Qualify with Precision: Conduct initial discovery calls to understand prospect needs, pain points, and current solutions, effectively qualifying leads based on established criteria Spark Interest: Articulate our value proposition clearly and concisely, generating genuine interest and securing next steps, such as discovery calls or demonstrations for our sales team Strategic Research: Conduct brief, targeted research on companies and contacts prior to calls to personalize your approach and increase engagement rates CRM Excellence: Accurately log all call activities, interactions, and relevant information in our CRM system (e.g., Salesforce, HubSpot) to maintain a clean and up-to-date pipeline Iterate & Improve: Actively participate in feedback sessions, sharing insights from calls to refine scripts, objection handling techniques, and overall strategy Collaborate for Success: Work closely with the sales team to ensure seamless handoffs of qualified opportunities Requirements Proven Cold Calling Prowess: 5+ years of extensive track record of success in purely cold calling roles, consistently hitting and exceeding targets. This isn't your first rodeo; you're a seasoned pro Exceptional Communication: Impeccable verbal communication skills with a clear, confident, and persuasive phone presence. You can think on your feet and adapt your message in real-time Resilience & Grit: A high tolerance for rejection and an unwavering determination to succeed. 
You see "no" as an opportunity to learn and refine your approach Active Listening: The ability to genuinely listen to prospects, uncover their needs, and tailor your conversation accordingly Self-Motivated & Independent: You thrive in an autonomous environment and are driven by personal and team achievements, requiring minimal oversight Quick Learner: Ability to rapidly grasp complex product/service information and articulate it effectively to diverse audiences CRM Savvy: Proficient with CRM software for logging activities, managing contacts, and tracking progress Bonus Points: Experience in a similar industry or with selling a comparable product/service Essential Skills & Traits: Exceptional Communication: Impeccable verbal communication skills with a clear, confident, and persuasive phone presence. You can think on your feet, adapt your message in real-time, and articulate complex technological benefits in an understandable way Clear & Understandable Accent: A neutral or easily understandable accent, ideally American English or a Western European English accent, is crucial for effective communication with our diverse international client base. Benefits Impactful Role: Be the frontline of our growth, directly contributing to our success by generating high-quality leads Performance-Driven Culture: Join a team that values results, offers clear objectives, and recognizes top performance Uncapped Potential: Opportunities for significant earning potential based on your success Focus on Your Strength: Dedicate yourself entirely to what you do best - cold calling - without the distractions of full-cycle sales
Posted 3 days ago
8.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title - Data Engineering Lead Overall Years of Experience - 8 to 10 Years Relevant Years of Experience - 4+ Data Engineering Lead The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Essential Roles and Responsibilities Must Have Skills Azure Data Lake Azure Synapse Analytics Azure Data Factory Azure Databricks Python (PySpark, NumPy, etc.) SQL ETL Data warehousing Azure DevOps Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, Spark Streaming Experience in integrating with business intelligence tools such as Power BI Good To Have Skills Big Data technologies (e.g., Hadoop, Spark) Data security General Skills Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables Escalates dependencies and risks Works with most DevOps tools, with limited supervision Completion of assigned tasks on time and regular status reporting Should be able to train new team members Knowledge of cloud solutions such as Azure or AWS, along with DevOps/Cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually Should be able to build strong relationships with project stakeholders EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 3 days ago
8.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Title - Data Engineering Lead Overall Years of Experience - 8 to 10 Years Relevant Years of Experience - 4+ Data Engineering Lead The Data Engineering Lead is responsible for collaborating with the Data Architect to design and implement scalable data lake architecture and data pipelines. Position Summary Design and implement scalable data lake architectures using Azure Data Lake services. Develop and maintain data pipelines to ingest data from various sources. Optimize data storage and retrieval processes for efficiency and performance. Ensure data security and compliance with industry standards. Collaborate with data scientists and analysts to facilitate data accessibility. Monitor and troubleshoot data pipeline issues to ensure reliability. Document data lake designs, processes, and best practices. Experience with SQL and NoSQL databases, as well as familiarity with big data file formats like Parquet and Avro. Essential Roles and Responsibilities Must Have Skills Azure Data Lake Azure Synapse Analytics Azure Data Factory Azure Databricks Python (PySpark, NumPy, etc.) SQL ETL Data warehousing Azure DevOps Experience in developing streaming pipelines using Azure Event Hubs, Azure Stream Analytics, Spark Streaming Experience in integrating with business intelligence tools such as Power BI Good To Have Skills Big Data technologies (e.g., Hadoop, Spark) Data security General Skills Experience with Agile and DevOps methodologies and the software development lifecycle. Proactive and responsible for deliverables Escalates dependencies and risks Works with most DevOps tools, with limited supervision Completion of assigned tasks on time and regular status reporting Should be able to train new team members Knowledge of cloud solutions such as Azure or AWS, along with DevOps/Cloud certifications, is desirable. Should be able to work with multicultural, global teams and collaborate virtually Should be able to build strong relationships with project stakeholders EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.