4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Senior PHP Developer based in Ahmedabad with over 4 years of experience, you will apply your expertise in PHP and the Laravel Framework. Your role will involve working with MySQL, including complex query writing, indexing, and query optimization, and requires a strong understanding of RESTful APIs, MVC architecture, and microservices. You should have practical experience with unit testing using PHPUnit or PestPHP, along with proficiency in Eloquent ORM, database migrations, and query builders. Hands-on experience with Git, CI/CD pipelines, and Docker is essential for this role, as is a solid grasp of caching strategies (such as Redis and Memcached) and performance optimization techniques. Strong problem-solving and debugging skills are crucial. Familiarity with message queues such as RabbitMQ, Kafka, or Redis Queue is desirable, and knowledge of AWS services, including EC2, S3, RDS, and Lambda, is advantageous. Experience with frontend technologies such as Angular and an understanding of event-driven architecture and WebSockets will also be beneficial. If you are a motivated PHP developer with a passion for continuous learning and innovation, this opportunity will allow you to contribute to a dynamic team environment and work on challenging projects.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Software Developer at our company, you will be required to have expertise in .NET Core, REST-based APIs, Kafka, cloud (GCP preferred), and databases (MongoDB preferred). You should have at least 5-8 years of experience in software development and a strong willingness to learn new technologies. Your responsibilities will include defining technical requirements, design, and estimation, along with developing high-quality, maintainable, and self-documenting code. Proficiency in object-oriented principles with .NET and .NET Core, as well as knowledge of MongoDB, Kafka, Oracle, and PostgreSQL, is essential. You should also be adept at writing unit tests using XUnit, familiar with UML diagrams, and skilled in code optimization techniques. Additionally, you should have experience with test-driven development, Agile/Scrum, aspect-oriented programming (AOP), standard web services (SOAP, REST, RESTful, JSON), JMS and messaging concepts, common design patterns, and UML modeling. Cloud experience with Azure is also desired. Familiarity with Sonar, Confluence, Jira, and Subversion will be beneficial for this role. At our company, you will have the opportunity to work on exciting projects in industries such as high-tech, communication, media, healthcare, retail, and telecom. You will collaborate with a diverse team of talented individuals in a supportive environment. We prioritize work-life balance by offering flexible schedules, work-from-home options, paid time off, and holidays. Our dedicated Learning & Development team provides various training programs to enhance your skills. We offer competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), health awareness programs, extended maternity leave, performance bonuses, and referral bonuses. You can enjoy fun perks such as sports events, cultural activities, food subsidies, corporate parties, and discounts at popular stores and restaurants. GlobalLogic is a leader in digital engineering, helping brands worldwide design and build innovative digital products and experiences. With expertise in experience design, complex engineering, and data, we accelerate our clients' transition into tomorrow's digital businesses. Headquartered in Silicon Valley, GlobalLogic operates globally, serving industries such as automotive, communications, financial services, healthcare, manufacturing, media, semiconductor, and technology. Operating under Hitachi, Ltd., we contribute to driving innovation through data and technology for a sustainable society with a higher quality of life.
Posted 5 days ago
5.0 - 9.0 years
0 Lacs
Maharashtra
On-site
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the Consumer & Community Banking team, you serve as a seasoned member of an agile team, designing and delivering trusted, market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives. As a Software Engineer III, your key responsibilities include executing software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems. You will create secure and high-quality production code, maintain algorithms that run synchronously with appropriate systems, and produce architecture and design artifacts for complex applications while ensuring design constraints are met by software code development. In addition, you will gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets to continuously improve software applications and systems. Proactively identifying hidden problems and patterns in data and using these insights to drive improvements to coding hygiene and system architecture is also part of your role. You will contribute to software engineering communities of practice and events that explore new and emerging technologies while adding to a team culture of diversity, equity, inclusion, and respect. Required qualifications, capabilities, and skills include hands-on practical experience in Java 11/17, Spring/Spring Boot, Kafka, and relational/non-relational databases (Oracle, Cassandra, Dynamo, Postgres); proficiency in coding in one or more languages; experience in developing, debugging, and maintaining code in a large corporate environment with modern programming languages and database querying languages; overall knowledge of the software development life cycle; a solid understanding of agile methodologies, CI/CD, application resiliency, and security; and demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile). Preferred qualifications, capabilities, and skills include familiarity with modern front-end technologies and exposure to cloud technologies. Join us in this dynamic and innovative work environment where you can grow your skills and make a significant impact.
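Purely as an illustrative sketch of the Java/Spring Boot/Kafka stack this role lists (this is not code from the posting; the topic name, consumer group, and class names are hypothetical, and it assumes the spring-boot-starter and spring-kafka dependencies plus broker settings in application properties):

```java
// Minimal Spring Boot application with a Kafka consumer; topic and group id are placeholders.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

@SpringBootApplication
public class PaymentsEventsApplication {
    public static void main(String[] args) {
        SpringApplication.run(PaymentsEventsApplication.class, args);
    }
}

@Component
class PaymentEventConsumer {
    // Consumes messages from a hypothetical "payment-events" topic and logs them.
    @KafkaListener(topics = "payment-events", groupId = "payments-service")
    public void onMessage(String message) {
        System.out.println("Received payment event: " + message);
    }
}
```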
Posted 5 days ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Location: Noida
Experience: 8+ years total, 6+ years in MuleSoft
Role Type: Individual Contributor (IC)
Notice Period: Immediate to 21 days
Start Date: Immediate

Role Summary
We are hiring a seasoned MuleSoft Developer with 8+ years of integration experience, including at least 4 years in building APIs and scalable enterprise integrations using the MuleSoft Anypoint Platform. You'll be responsible for designing APIs, integrating legacy and cloud systems, implementing security policies, and contributing to CI/CD pipelines. You will work hands-on with RAML-based API design, DataWeave transformations, connectors (HTTP, SFTP, Salesforce), CI/CD (Jenkins/GitHub), and security standards (OAuth 2.0, JWT, SAML). A strong understanding of reusable modules, error handling, and MUnit test design is required.

Must-Have Skills & Required Depth
- API Design & Development (RAML, Anypoint Platform, API Manager): Hands-on experience designing RAML 1.0 APIs, publishing to Exchange, and applying policies like rate-limiting and throttling using API Manager.
- Integration Development (HTTP, SFTP, Salesforce connectors): Built flows integrating with external systems using native MuleSoft connectors; familiar with routers, object stores, and connection settings.
- Data Transformation (DataWeave 2.0): Proficient in transforming JSON, XML, and CSV using DataWeave; handled validations, flattening, and common functions like map, filter, groupBy.
- Security Implementation (OAuth 2.0, JWT): Applied OAuth 2.0 and JWT policies in API Manager; understands token flows and error handling.
- CI/CD Integration (Jenkins, Maven, GitHub): Contributed to pipelines; familiar with build triggers, Maven configs, Git branching, and version control.
- Unit Testing (MUnit): Developed MUnit tests with mocks and assertions; 70%+ coverage for success and error paths.
- Error Handling (Exception Strategies, SMTP): Used global error handlers, exception strategies (on-error-propagate, on-error-continue), and SMTP-based alerting.
- Database Integration (MySQL, Oracle): Implemented DB flows using connectors; used parameterized queries and stored procedures.
- Messaging Integration (Anypoint MQ, RabbitMQ, SNS): Built pub/sub flows with at least one platform; understands queues, DLQs, and redelivery basics.

Nice-to-Have Skills & Required Depth
- SAML (Security Protocols): Basic understanding of SAML flows and SSO integration; not expected to implement.
- Containerization (Docker, OpenShift): Familiar with deploying Mule runtimes in containers; orchestration knowledge not mandatory.
- Monitoring (ELK Stack, SEQ): Exposure to structured logging and centralized dashboards; basic familiarity with alerts.
- Azure Cloud Services (Service Bus, Redis, Storage): Experience working with Azure messaging and storage services in integration flows.
- Messaging Alternatives (Kafka, JMS): Prior experience with enterprise messaging using Kafka, JMS, or Azure Service Bus preferred.
- DevOps Tools (Azure DevOps, Bitbucket): Worked with alternate CI/CD tools; collaborated with DevOps teams on release workflows.
Posted 5 days ago
1.0 - 7.0 years
0 Lacs
Haryana
On-site
We are seeking a Senior Java Backend Developer with extensive knowledge of Java, Spring Boot, microservices, and AWS for a hybrid opportunity in Gurugram. As a Senior Java Backend Developer, you are expected to possess strong expertise in Core Java, Spring Boot, and microservices. Hands-on experience with Kafka, Docker, and AWS services such as EC2, S3, Lambda, and RDS is essential. Additionally, you should have prior experience developing and deploying microservices in cloud environments, along with a solid grasp of SOW compliance. A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field is required. This position is available as a full-time or contractual/temporary role with a duration of 12 months. The work is based in Gurgaon, Haryana, and requires in-person attendance Monday to Friday on morning shifts. The ideal candidate should have at least 7 years of experience in Java, 6 years in AWS, and 1 year in SOW compliance. The selected candidate will work in close collaboration with the employer; for further details, you can contact the employer at +91 9990068898.
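As a small, hedged illustration of the AWS integration skills listed above (not code from this posting; the bucket name, key, and region are invented, and credentials are assumed to come from the default provider chain), an S3 upload with the AWS SDK for Java v2 could look roughly like this:

```java
// Illustrative S3 upload using the AWS SDK for Java v2; bucket and key are hypothetical.
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3UploadExample {
    public static void main(String[] args) {
        try (S3Client s3 = S3Client.builder().region(Region.AP_SOUTH_1).build()) {
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket("example-reports-bucket")   // hypothetical bucket name
                    .key("reports/daily-summary.json")  // hypothetical object key
                    .build();
            s3.putObject(request, RequestBody.fromString("{\"status\":\"ok\"}"));
            System.out.println("Uploaded reports/daily-summary.json");
        }
    }
}
```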
Posted 5 days ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be working as an Informatica BDM professional at PibyThree Consulting Pvt Ltd. in Pune, Maharashtra. PibyThree is a global cloud consulting and services provider focusing on Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. The company's goal is to help businesses succeed by leveraging technology for automation and increased productivity. Key requirements and responsibilities include:
- At least 4 years of development and design experience in Informatica Big Data Management
- Excellent SQL skills
- Hands-on work with HDFS, HiveQL, BDM Informatica, Spark, HBase, Impala, and other big data technologies
- Designing and developing BDM mappings in Hive mode for large volumes of INSERT/UPDATE
- Creating complex ETL mappings using transformations such as Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, Lookups, Filters, Sequence, Router, and Update Strategy
- Ability to debug Informatica and use tools such as Sqoop and Kafka
This is a full-time position that requires you to work in person during day shifts. The preferred education qualification is a Bachelor's degree, and the preferred experience is a total of 4 years of work experience, including 2 years specifically in Informatica BDM.
Posted 5 days ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Qualifications
- 5+ years of software development experience
- Strong development skills in Java JDK 1.8 or above
- Java fundamentals such as exception handling, serialization/deserialization, and immutability concepts
- Good fundamental knowledge of Enums, Collections, Annotations, Generics, autoboxing, and data structures
- Databases: RDBMS/NoSQL (SQL, joins, indexing)
- Multithreading (re-entrant locks, fork/join, synchronization, Executor framework); a short illustrative sketch follows this list
- Spring Core and Spring Boot, including security and transactions
- Hands-on experience with JMS-style messaging (ActiveMQ, RabbitMQ, Kafka, etc.)
- Memory management (JVM configuration, profiling, GC), performance tuning, and testing with JMeter or a similar tool
- DevOps (CI/CD: Maven/Gradle, Jenkins, quality plugins, Docker and containerization)
- Logical/analytical skills
- Thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns
- Hands-on experience with any of the logging frameworks (SLF4J/Logback/Log4j)
- Experience writing JUnit test cases using Mockito/PowerMock frameworks
- Practical experience with Maven/Gradle and knowledge of version control systems such as Git/SVN
- Good communication skills and the ability to work with global teams to define and deliver on projects
- Sound understanding of and experience with the software development process and test-driven development
- Cloud: AWS/Azure
- Experience in microservices
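As a hedged illustration of the Executor framework item above (not code from the posting; the task logic is a made-up placeholder), a minimal Java sketch might look like this:

```java
// Fixed thread pool running independent tasks and collecting results via Future.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> results = new ArrayList<>();
        for (int i = 1; i <= 8; i++) {
            final int taskId = i;
            Callable<Integer> task = () -> taskId * taskId; // placeholder work
            results.add(pool.submit(task));
        }
        int sum = 0;
        for (Future<Integer> f : results) {
            sum += f.get(); // blocks until each task completes
        }
        pool.shutdown();
        System.out.println("Sum of squares 1..8 = " + sum);
    }
}
```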
Posted 5 days ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You should have 5-12 years of experience in Big Data and data-related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies such as Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools such as Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP systems, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with cloud-native data services such as AWS or Azure Databricks is essential. The role requires the ability to efficiently lead a team, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the Data Engineer category and suits candidates with backgrounds as ML/AI engineers, data scientists, or software engineers.
Posted 5 days ago
170.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Summary
Strong hands-on developer/DevOps engineer for the Credit Grading Hive in PE. This is a #1 priority for the FFG program, and strong engineering talent is required to drive the rebuild of the CreditMate legacy platform. The skill set required is to completely overhaul and develop an in-house solution on the latest technology stack. The person will be part of the team developing the new CreditMate, aligned with the CC-wide unified UI/UX strategy.

Key Responsibilities
Strategy: Advise on future technology capabilities and architecture design considering business objectives, technology strategy, trends, and regulatory requirements. Awareness and understanding of the Group's business strategy and model appropriate to the role.
Business: Awareness and understanding of the wider business, economic, and market environment in which the Group operates. Understand and recommend business flows and translate them into an API ecosystem.
Processes: Responsible for executing and supervising microservices development to facilitate business capabilities and orchestrating them to achieve business outcomes.
People & Talent: Lead through example and build the appropriate culture and values. Set appropriate tone and expectations for the team and work in collaboration with risk and control partners. Ensure the provision of ongoing training and development of people, and ensure that holders of all critical functions are suitably skilled and qualified for their roles, with effective supervision in place to mitigate any risks.
Risk Management: The ability to interpret the portfolio's key risks, identify key issues based on this information, and put in place appropriate controls and measures.
Governance: Awareness and understanding of the regulatory framework in which the Group operates, and the regulatory requirements and expectations relevant to the role.
Regulatory & Business Conduct: Display exemplary conduct and live by the Group's Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters. Lead to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment. Serve as a Director of the Board, exercising authorities delegated by the Board of Directors and acting in accordance with the Articles of Association (or equivalent).
Key stakeholders: Product Owners, Hive Leads, Client Coverage Tech and Business Stakeholders.

Qualifications
Education: B.Tech in Computer Science/IT
Certifications: Java, Kubernetes
Languages: Java, Quarkus, Spring, SQL, Python

Skills and Experience
Participates in the development of multiple or large software products and estimates and monitors development costs based on functional and technical requirements. Delivery experience as a technical project manager and analysis skills. Contrasts the advantages and drawbacks of different development languages and tools. Expertise in RDBMS solutions (Oracle, PostgreSQL) and NoSQL offerings (Cassandra, MongoDB, etc.). Experience in distributed technologies (e.g., Kafka, Apache MQ, RabbitMQ) will be an added advantage. Strong knowledge of application integration using web services (SOAP/REST/gRPC) or messaging using JMS (a short illustrative sketch follows below).
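As a small, hedged illustration of the Java/Quarkus/REST stack named in the qualifications (this is not Standard Chartered code; the resource path and payload are invented, and it assumes a Quarkus 3.x project with the Jakarta REST extension):

```java
// Minimal Jakarta REST (Quarkus-style) resource exposing a hypothetical grade lookup endpoint.
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@Path("/grades")
public class GradeResource {

    // Returns a hard-coded grade for the given customer id; a real service would
    // delegate to the credit-grading domain logic instead.
    @GET
    @Path("/{customerId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String gradeFor(@PathParam("customerId") String customerId) {
        return "{\"customerId\":\"" + customerId + "\",\"grade\":\"B+\"}";
    }
}
```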
About Standard Chartered We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion. Together We Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term What We Offer In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations. Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holiday, which is combined to 30 days minimum. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values driven organisation, one that embraces and celebrates our unique diversity, across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements.
Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product.
Solution Documentation - Documents information and solutions based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning.
Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements.
Data Quality - Identifies, understands and corrects flaws in data that support effective information governance across operational business processes and decision making.
Problem Solving - Solves problems and may mentor others on effective problem solving using a systematic analysis process, leveraging industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem recurrence are implemented.
Values differences - Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
- 5-8 years of experience
- Familiarity with analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a Big Data platform using open source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka, or equivalent college coursework
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement for a cloud-based environment, plus other data extraction tools and methods from a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- Experience with IoT technology
- Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand product vision.
2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into Cummins Digital Core (Azure DataLake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards.
4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the DataLake.
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in evaluation of new data tools and POCs and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks (see the short sketch below).
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies such as Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417810
Relocation Package: Yes
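As a hedged illustration of the Spark-based batch transformation work this posting describes (not Cummins code; the file paths, column names, and job name are hypothetical, and it assumes a Spark dependency on the classpath), a small Java sketch might look like this:

```java
// Illustrative Spark (Java API) batch job: read raw extracts, aggregate, write curated output.
import static org.apache.spark.sql.functions.col;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class OrdersEtlJob {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("orders-etl")
                .getOrCreate();

        // Hypothetical source path and schema: completed orders aggregated by date.
        Dataset<Row> orders = spark.read().parquet("s3a://raw-zone/orders/");
        Dataset<Row> dailyTotals = orders
                .filter(col("status").equalTo("COMPLETED"))
                .groupBy(col("order_date"))
                .sum("amount");

        dailyTotals.write().mode("overwrite").parquet("s3a://curated-zone/daily_order_totals/");
        spark.stop();
    }
}
```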
Posted 5 days ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Leads projects for design, development and maintenance of a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with key business stakeholders, IT experts and subject-matter experts to plan, design and deliver optimal analytics and data science solutions. Works on one or many product teams at a time. Key Responsibilities Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Designs and implements framework to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Designs and provide guidance on building reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Designs and implements physical data models to define the database structure. Optimizing database performance through efficient indexing and table relationships. Participates in optimizing, testing, and troubleshooting of data pipelines. Designs, develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses innovative and modern tools, techniques and architectures to partially or completely automate the most-common, repeatable and tedious data preparation and integration tasks in order to minimize manual and error-prone processes and improve productivity. Assists with renovating the data management infrastructure to drive automation in data integration and management. Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, Kanban Coaches and develops less experienced team members. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. 
Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience Intermediate experience in a relevant discipline area is required. Knowledge of the latest technologies and trends in data engineering are highly preferred and includes: 5-8 years of experience Familiarity analyzing complex business systems, industry requirements, and/or data regulations Background in processing and managing large data sets Design and development for a Big Data platform using open source and third-party tools SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Experience developing applications requiring large file movement for a Cloud-based environment and other data extraction tools and methods from a variety of sources Experience in building analytical solutions Intermediate Experiences In The Following Are Preferred Experience with IoT technology Experience in Agile software development Qualifications Work closely with business Product Owner to understand product vision. 2) Play a key role across DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Independently design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 
5) Responsible for creation, maintenance and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in evaluation of new data tools and POCs and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
Programming Languages: Proficiency in languages such as Python, Java, and/or Scala.
Database Management: Expertise in SQL and NoSQL databases.
Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks.
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms.
ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes.
Data Replication: Working knowledge of replication technologies such as Qlik Replicate is a plus.
API: Working knowledge of APIs to consume data from ERP and CRM systems.

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417809
Relocation Package: Yes
Posted 5 days ago
4.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Description GPP Database Link (https://cummins365.sharepoint.com/sites/CS38534/) Job Summary Supports, develops and maintains a data and analytics platform. Effectively and efficiently process, store and make data available to analysts and other consumers. Works with the Business and IT teams to understand the requirements to best leverage the technologies to enable agile data delivery at scale. Key Responsibilities Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured). Implements methods to continuously monitor and troubleshoot data quality and data integrity issues. Implements data governance processes and methods for managing metadata, access, retention to data for internal and external users. Develops reliable, efficient, scalable and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages. Develops physical data models and implements data storage architectures as per design guidelines. Analyzes complex data elements and systems, data flow, dependencies, and relationships in order to contribute to conceptual physical and logical data models. Participates in testing and troubleshooting of data pipelines. Develops and operates large scale data storage and processing solutions using different distributed and cloud based platforms for storing data (e.g. Data Lakes, Hadoop, Hbase, Cassandra, MongoDB, Accumulo, DynamoDB, others). Uses agile development technologies, such as DevOps, Scrum, Kanban and continuous improvement cycle, for data driven application. Responsibilities Competencies: System Requirements Engineering - Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts. Collaborates - Building partnerships and working collaboratively with others to meet shared objectives. Communicates effectively - Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer focus - Building strong customer relationships and delivering customer-centric solutions. Decision quality - Making good and timely decisions that keep the organization moving forward. Data Extraction - Performs data extract-transform-load (ETL) activities from variety of sources and transforms them for consumption by various downstream applications and users using appropriate tools and technologies. Programming - Creates, writes and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance and compliance requirements. Quality Assurance Metrics - Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics and key performance indicators, to deliver a quality product. 
Solution Documentation - Documents information and solution based on knowledge gained as part of product development activities; communicates to stakeholders with the goal of enabling improved productivity and effective knowledge transfer to others who were not originally part of the initial learning. Solution Validation Testing - Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools and metrics, to ensure that it works as designed and meets customer requirements. Data Quality - Identifies, understands and corrects flaws in data that supports effective information governance across operational business processes and decision making. Problem Solving - Solves problems and may mentor others on effective problem solving by using a systematic analysis process by leveraging industry standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies the systemic root causes and ensures actions to prevent problem reoccurrence are implemented. Values differences - Recognizing the value that different perspectives and cultures bring to an organization. Education, Licenses, Certifications College, university, or equivalent degree in relevant technical discipline, or relevant equivalent experience required. This position may require licensing for compliance with export controls or sanctions regulations. Experience 4-5 Years of experience. Relevant experience preferred such as working in a temporary student employment, intern, co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes: Exposure to Big Data open source SPARK, Scala/Java, Map-Reduce, Hive, Hbase, and Kafka or equivalent college coursework SQL query language Clustered compute cloud-based implementation experience Familiarity developing applications requiring large file movement for a Cloud-based environment Exposure to Agile software development Exposure to building analytical solutions Exposure to IoT technology Qualifications Work closely with business Product Owner to understand product vision. 2) Participate in DBU Data & Analytics Power Cells to define, develop data pipelines for efficient data transport into Cummins Digital Core ( Azure DataLake, Snowflake). 3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment of DBU project data pipeline design standards. 4) Work under limited supervision to design, develop, test, implement complex data pipelines from transactional systems (ERP, CRM) to Datawarehouses, DataLake. 5) Responsible for creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOP) with guidance and help from senior data engineers. 6) Take part in evaluation of new data tools, POCs with guidance and help from senior data engineers. 7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision. 8) Assist to resolve issues that compromise data accuracy and usability. Programming Languages: Proficiency in languages such as Python, Java, and/or Scala. Database Management: Intermediate level expertise in SQL and NoSQL databases. Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks. 
Cloud Services: Experience with Azure, Databricks and AWS cloud platforms. ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes. API: Working knowledge of API to consume data from ERP, CRM Job Systems/Information Technology Organization Cummins Inc. Role Category Remote Job Type Exempt - Experienced ReqID 2417808 Relocation Package Yes
Posted 5 days ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
APN Consulting, Inc. is a progressive IT staffing and services company offering innovative business solutions to improve client business outcomes. We focus on high impact technology solutions in ServiceNow, Fullstack, Cloud & Data, and AI / ML. Due to our globally expanding service offerings we are seeking top-talent to join our teams and grow with us. Role: Fullstack Developer Location: Noida Work Mode: Hybrid Work hours: 2-11 pm India hours As a Senior Software Engineer on the Enterprise Development Services team, you will play a key role in designing and developing solutions, patterns, and standards to be adopted across engineering teams. You'll serve as a standard bearer for development practices, design quality, and technical culture, contributing through reusable components, best practices, and direct mentorship (e.g., pair programming, tutorials, internal presentations). You'll also provide regular progress updates to your manager and support team-wide alignment to architectural goals. Role And Responsibilities Build and maintain enterprise-grade backend services using Java microservices and front-end applications using React JS Develop reusable components, frameworks, and libraries for adoption across product teams Work with Jenkins and other CI/CD tools to automate build, deployment, and testing pipelines Collaborate with engineering teams to ensure adherence to best practices and coding standards Provide technical support for the adoption of shared services and components Participate in the evolution of company-wide standards and software development policies Adapt to shifting priorities in a dynamic environment Debug complex issues involving APIs, performance, and systems integration Support technical enablement and knowledge sharing across the organization Mandatory Skills 4–5 years of relevant experience in software development with a focus on full-stack and cloud-native technologies (Azure or AWS) Strong backend development skills using Java microservices Experience with front-end development using React JS Experience with Docker and Kubernetes (EKS or AKS) Experience with CI/CD tools such as Jenkins and Terraform (or similar) Familiarity with debugging common web issues (HTTP, XHR, JSON, CORS, SSL, S3, etc.) Proven ability to investigate performance and memory issues Strong understanding of API design and ability to reduce complex requirements into scalable architecture Knowledge of messaging patterns and tools such as Kafka or RabbitMQ Applies software engineering best practices, including design patterns and linting Strong communication and collaboration skills in cross-functional teams Demonstrated ability to deliver in fast-paced, changing environments Preferred Skills Familiarity with Groovy programming language Experience with Scala or Ruby on Rails programming language Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience) We are committed to fostering a diverse, inclusive, and equitable workplace where individuals from all backgrounds feel valued and empowered to contribute their unique perspectives. We strongly encourage applications from candidates of all genders, races, ethnicities, abilities, and experiences to join our team and help us build a culture of belonging.
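As a small, hedged illustration of the HTTP/JSON debugging skills listed above (not code from the posting; the URL is a placeholder), the JDK 11+ built-in HttpClient is often the quickest way to probe an endpoint:

```java
// Probe an HTTP endpoint and print status and body; the URL is hypothetical.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ApiProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/api/health")) // placeholder URL
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        // Checking the status code and raw body is usually the first step when chasing
        // HTTP, JSON, or CORS issues like those mentioned in the requirements.
        System.out.println("Status: " + response.statusCode());
        System.out.println("Body:   " + response.body());
    }
}
```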
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Applications Development Intermediate Programmer Analyst is an intermediate level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities. Responsibilities Design, develop, and maintain microservices using Java, Spring Boot, and related technologies. Write clean, testable, and well-documented code. Participate in the full software development lifecycle, including requirements gathering, design, development, testing, and deployment. Collaborate with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality software. Implement RESTful APIs for internal and external consumption. Ensure the performance, security, and scalability of microservices. Contribute to the continuous improvement of our development processes and tools. Troubleshoot and resolve production issues. Participate in code reviews to ensure code quality and share knowledge. Stay up to date with the latest trends and technologies in Java development and microservices architecture. Qualifications Bachelor’s degree in computer science or a related field. 3-7 years of experience in Java software development. Strong proficiency in Java and related technologies. Experience with Spring Boot and the Spring ecosystem (Spring MVC, Spring Data, Spring Cloud). Experience designing and developing microservices architectures. Experience with RESTful API design and development. Experience with relational databases (e.g., MySQL, PostgreSQL) Experience with containerization technologies (e.g., Docker, Kubernetes). Experience with build tools such as Maven or Gradle. Experience with version control systems such as Git. Strong understanding of software development principles and design patterns. Excellent problem-solving and debugging skills. Strong communication and collaboration skills. Preferred Qualifications Experience with message brokers such as Kafka/MQ. Experience with CI/CD pipelines. Experience with testing frameworks such as JUnit or Mockito. Experience with monitoring and logging tools. Understanding of security best practices for microservices. Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. 
View Citi’s EEO Policy Statement and the Know Your Rights poster.
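As a hedged illustration of the Spring Boot microservice and RESTful API work described in the posting above (this is not Citi code; the endpoint path, record, and data are invented, and it assumes Java 17 with the spring-boot-starter-web dependency):

```java
// Minimal Spring Boot REST controller returning hypothetical account data.
import java.util.List;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class AccountsServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(AccountsServiceApplication.class, args);
    }
}

record Account(String id, String owner, double balance) {}

@RestController
@RequestMapping("/api/accounts")
class AccountController {
    // In a real service this would come from a repository; here it is hard-coded.
    private final List<Account> accounts = List.of(
            new Account("a-1", "Alice", 1250.00),
            new Account("a-2", "Bob", 310.50));

    @GetMapping
    public List<Account> all() {
        return accounts;
    }

    @GetMapping("/{id}")
    public Account byId(@PathVariable("id") String id) {
        return accounts.stream()
                .filter(a -> a.id().equals(id))
                .findFirst()
                .orElseThrow(() -> new IllegalArgumentException("Unknown account: " + id));
    }
}
```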
Posted 5 days ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a skilled and enthusiastic Java Microservices Developer with 3-7 years of experience to join our dynamic team. You will play a key role in designing, developing, and maintaining high-performance, scalable microservices using Java, Spring Boot, and related technologies. You should be passionate about building robust and efficient applications and have a strong understanding of microservices architecture. Responsibilities Design, develop, and maintain microservices using Java, Spring Boot, and related technologies. Write clean, testable, and well-documented code. Participate in the full software development lifecycle, including requirements gathering, design, development, testing, and deployment. Collaborate with cross-functional teams, including product managers, designers, and other developers, to deliver high-quality software. Implement RESTful APIs for internal and external consumption. Ensure the performance, security, and scalability of microservices. Contribute to the continuous improvement of our development processes and tools. Troubleshoot and resolve production issues. Participate in code reviews to ensure code quality and share knowledge. Stay up to date with the latest trends and technologies in Java development and microservices architecture. Qualifications Bachelor’s degree in computer science or a related field. 3-7 years of experience in Java software development. Strong proficiency in Java and related technologies. Experience with Spring Boot and the Spring ecosystem (Spring MVC, Spring Data, Spring Cloud). Experience designing and developing microservices architectures. Experience with RESTful API design and development. Experience with relational databases (e.g., Oracle, MySQL, PostgreSQL) Experience with containerization technologies (e.g., Docker, Kubernetes). Experience with build tools such as Maven or Gradle. Experience with version control systems such as Git. Strong understanding of software development principles and design patterns. Excellent problem-solving and debugging skills. Strong communication and collaboration skills. Preferred Qualifications Experience with message brokers such as Kafka/MQ. Experience with CI/CD pipelines. Experience with testing frameworks such as JUnit or Mockito. Experience with monitoring and logging tools. Understanding of security best practices for microservices. Education: Bachelor’s degree/University degree or equivalent experience This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills For complementary skills, please see above and/or contact the recruiter. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. 
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
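As a hedged illustration of the JUnit/Mockito testing experience listed in the preferred qualifications above (not code from the posting; the service and repository are hypothetical and defined inline to keep the sketch self-contained):

```java
// JUnit 5 + Mockito unit test: mock a collaborator, stub its behavior, and verify the call.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

public class PriceServiceTest {

    // Hypothetical collaborator and service under test.
    interface PriceRepository {
        double basePrice(String sku);
    }

    static class PriceService {
        private final PriceRepository repository;
        PriceService(PriceRepository repository) { this.repository = repository; }
        double priceWithTax(String sku) { return repository.basePrice(sku) * 1.18; }
    }

    @Test
    void appliesTaxOnTopOfBasePrice() {
        PriceRepository repository = mock(PriceRepository.class);
        when(repository.basePrice("sku-1")).thenReturn(100.0);

        PriceService service = new PriceService(repository);

        assertEquals(118.0, service.priceWithTax("sku-1"), 1e-9);
        verify(repository).basePrice("sku-1");
    }
}
```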
Posted 5 days ago
4.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Your Impact:
- Adheres to design and coding best practices and standards.
- Sets up the development and production environments and troubleshoots performance issues.
- Participates in architecture and design reviews for projects that require complex technical solutions.
- Represents the organization in customer-facing communication pertinent to Sapient's technical expertise on the specific platform.
- Not only participates in the development stage as a hands-on developer, but also owns deliveries end to end, from design to deployment.
- Mentors and develops the technical skills of other software developers.
- Follows and governs the engineering best practices set up in the team.
- Develops and designs solutions with NFRs such as performance, scalability, accessibility, maintainability, configurability, availability, and monitoring as part of the design.
- Owns and provides a point of view to measure and improve quality metrics.
- Drives performance tuning, redesign, and refactoring for a module.
- Contributes to designing and implementing the build and release process.
- Owns consistency and high quality in solution delivery.

About the Role
Qualifications:
- 4 to 10 years of strong development skills in the .NET Core framework.
- Excellent acumen in data structures, algorithms, problem solving, and logical/analytical skills.
- Thorough understanding of OOP concepts, design principles, and implementation of different types of design patterns.
- Sound understanding of concepts such as exception handling, serialization/deserialization, and immutability.
- Good fundamental knowledge of Enums, Collections, Annotations, Generics, Autoboxing, etc.
- Experience with multithreading, async-await/TPL/reactive programming, and concurrent collections.
- Good understanding of .NET resource management, including garbage collection concepts.
- Experience in RDBMS or NoSQL databases and writing SQL queries (joins, group by, aggregate functions, etc.).
- Skilled in database programming (stored procedures, triggers, functions) and a good understanding of ADO.NET/ORM frameworks.
- Hands-on experience with messaging/data streaming platforms such as RabbitMQ, ActiveMQ, Kafka, etc.
- Hands-on experience with frameworks for managing application cross-cutting concerns, such as logging, dependency injection, and configuration management frameworks.
- Experience in developing cloud applications using PaaS, SaaS, or IaaS options.
- Experience in developing/migrating on-prem applications on cloud platforms.
- Good understanding of automated provisioning of cloud-based resources with appropriate access controls.
- Hands-on experience with a scripting language such as PowerShell or Python.
- Good understanding of code build, test, quality check, and release tools such as Git, MSTest, TFS, MSBuild, Jenkins/Bamboo/Octopus, and cloud DevOps tools.
- Good communication skills and the ability to work with global teams to define and deliver on projects.
- Hands-on experience in microservices architecture with a good understanding of key microservices-based patterns.
- Hands-on experience in creating and consuming microservices using .NET Core APIs.
- Experience with the security, transaction, idempotency, log tracing, distributed caching, monitoring, and containerization requirements of microservices.
- Must have experience in AJAX, jQuery, and at least one JavaScript framework (such as Angular or React).
- Experience writing unit test cases using MSTest and mocking frameworks.
- Skilled/experienced in writing end-to-end automated tests using a BDD framework such as SpecFlow.
Understanding of and experience with application monitoring tools like New Relic, the ELK stack, AppDynamics, or cloud monitoring tools.
Posted 5 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Who we are Mindtickle is the market-leading revenue productivity platform that combines on-the-job learning and deal execution to get more revenue per rep. Mindtickle is recognized as a market leader by top industry analysts and is ranked by G2 as the #1 sales onboarding and training product. We’re honoured to be recognized as a Leader in the first-ever Forrester Wave™: Revenue Enablement Platforms, Q3 2024! Job Snapshot As the Engineering Manager / Technical Product Owner- II (TPO- II) for the Core Components (CoCo) team at Mindtickle, you’ll lead a team that owns foundational services powering every product line across the company. These include mission-critical platforms like Notifications, User Management, Authentication, Rule Automation, LLM Gateway, and more. You’ll bring structure, clarity, and focus to a team managing a wide variety of systems - many of which are legacy, high-impact, and cross-cutting in nature. You’ll work closely with engineering, product, and infrastructure leaders to bring consistency, predictability, and operational maturity to a domain that acts as the “spine” of Mindtickle’s platform. Why this role matters Mindtickle’s platform strength depends on how well our core systems scale and evolve. From onboarding new users to triggering LLM-based workflows, the CoCo team ensures reliability, speed, and extensibility. Your leadership will help transform a resilient but stretched team into a focused, high-impact engine of platform excellence. Key Problem Areas Modernizing Foundational Systems: You’ll get to shape and evolve the core services that power every part of Mindtickle’s platform—from Notifications and User Management to Rule Automation and beyond. Many of these systems have been around for years, offering both the challenge and opportunity to streamline, consolidate, and upgrade. Creating Focus from Breadth: The CoCo team owns a wide range of services. You’ll help bring clarity and structure by introducing ownership models, SME (Subject matter expert) roles, and documentation practices, ensuring depth in expertise while maintaining agility. Driving Stability and Scalability: These services need to work with high uptime. You’ll lead initiatives that improve reliability, reduce alert fatigue, and harden operational processes so that our platform remains trusted and responsive at scale. Bringing Clarity to Cross-Team Interfaces: The team often acts as a glue layer across Mindtickle. You’ll help define clear ownership boundaries, smoothen collaboration with other teams, and ensure there’s no ambiguity about who owns what. Enabling Scalable Growth: As the company scales, so does the complexity of our systems. You’ll play a key role in helping the team move from reactive firefighting to proactive planning, creating systems and rituals that support long-term velocity and team health. What’s in it for you? Drive sprint planning, backlog grooming, and delivery tracking with technical leads and PMs. Shape and enforce clear ownership boundaries for CoCo systems; lead service consolidation and scope redefinition. Champion creation of runbooks, escalation paths, system dependency maps, and KT sessions to raise team baseline. Manage stakeholder expectations and actively resolve blockers across product, infra, and adjacent platform teams. Lead the team through incidents, root cause analysis, and preventive action to ensure high system availability. Foster a psychologically safe, feedback-driven team culture that values learning, visibility, and impact. 
Provide strong technical judgment during prioritization discussions, planning trade-offs, and roadmap alignment.
What we are looking for: 8+ years of total experience, with at least 2–3 years leading or owning backend/platform teams or core services. Strong technical fundamentals in building and maintaining high-scale distributed systems (notifications, auth, infra integrations, etc.). Proven experience driving clarity in ambiguous problem spaces and upholding system boundaries across teams. Strong project management and execution discipline—can set up and run sprints, manage dependencies, and measure delivery health. Excellent collaboration and stakeholder management skills—comfortable driving cross-functional discussions and trade-offs. Hands-on experience or working knowledge of technologies like Postgres, Kafka, Redis, Python/Node.js microservices, or similar. Bonus: familiarity with event-driven systems, on-call rotations, and developer experience in platform teams.
Posted 5 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Reference # 321168BR. Job Type: Full Time.
Your role: Design, develop, and improve the digital products and technology services we provide to our clients. Apply a broad range of software engineering practices, from analyzing user needs and developing new features to automated testing and deployment. Ensure the quality, security, reliability, and compliance of our solutions by applying our digital principles and implementing both functional and non-functional requirements. Build observability into our solutions and remediate the root cause of risks and issues. Understand, represent, and advocate for client needs. Share knowledge and expertise with colleagues, help with hiring, and contribute regularly to our engineering culture and internal communities.
Your team: You'll be working in the Serviceworks team in Pune. The team is currently working on delivering the next-generation platform for Wealth Management in the Americas. The team works in the Agile development model using the latest technology stack. The team consists of full-stack developers, business analysts, and DevOps engineers and works closely with the relevant stakeholders. Diversity helps us grow, together. That’s why we are committed to fostering and advancing diversity, equity, and inclusion. It strengthens our business and brings value to our clients.
Your expertise: 8+ years of experience in full-stack development using Java, Spring Boot, JPA, and React. Excellent understanding and hands-on experience of Core Java, Spring, Spring Boot, microservices, and UI. Must have a good understanding of cloud-native microservice architecture, Kubernetes, database concepts, the Kafka messaging system, cloud fundamentals, and GitLab. Hands-on experience in React JS. Know-how of CI/CD pipeline integration (e.g., GitLab, Azure DevOps). Experience with relational database management systems such as MSSQL and Oracle.
About Us: UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
How We Hire: We may request you to complete one or more assessments during the application process. Learn more.
Join us: At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success. We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us.
Disclaimer / Policy Statements: UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Senior Developer - Experience 6-10 Years. Technical Lead - Experience 8+ Years. Architect - Experience 10+ Years. Location - Remote.
Key Responsibilities: Lead the design and implementation of cloud-native microservices using Python FastAPI, Pydantic, and Async I/O. Architect, build, and optimize APIs, worker services, and event-driven systems leveraging Confluent Kafka. Define and enforce coding standards, testing strategies, and development best practices. Implement CI/CD pipelines using GitHub Actions or other tools and manage secure and scalable deployments. Work with Docker, Terraform, and GCP infrastructure services including Cloud Run, Pub/Sub, Secret Manager, Artifact Registry, and Eventarc. Guide the integration of monitoring and observability tools such as New Relic, Cloud Logging, and Cloud Monitoring. Drive initiatives around performance tuning, caching (Redis), and data transformation including XSLT and XML/XSD processing. Support version control and code collaboration using Git/GitHub. Mentor team members, conduct code reviews, and ensure quality through unit testing frameworks like Pytest or unittest. Collaborate with stakeholders to translate business needs into scalable and maintainable solutions.
Mandatory Skills: Programming & Frameworks: Expert in Python and experienced with FastAPI or equivalent web frameworks. Strong knowledge of Async I/O and Pydantic Settings. Hands-on with Pytest or unittest. Experience with Docker, Terraform, and Kafka (Confluent). Version Control & DevOps: Experience with any version control system. Proven CI/CD pipeline implementation experience. Cloud & Infrastructure: Hands-on experience with any cloud provider. Data Processing: Knowledge of XSLT transformations and XML/XSD processing. Monitoring & Observability: Familiar with integrating monitoring/logging solutions, New Relic preferred. Databases & Storage: Experience with any database/storage solution. Understanding of caching mechanisms.
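For candidates less familiar with the stack named above, here is a minimal, illustrative sketch of the FastAPI + Pydantic + Async I/O style of service this listing describes; the service name, routes, and fields are hypothetical and not part of the role.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical service name

class Order(BaseModel):
    order_id: str
    amount: float

# In-memory store standing in for a real database or cache layer.
_ORDERS: dict[str, Order] = {}

@app.post("/orders", status_code=201)
async def create_order(order: Order) -> Order:
    # Async handler: FastAPI validates the body against the Pydantic model
    # and awaits I/O-bound work without blocking the event loop.
    _ORDERS[order.order_id] = order
    return order

@app.get("/orders/{order_id}")
async def get_order(order_id: str) -> Order:
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

A Pytest-style unit test could exercise this with FastAPI's TestClient, e.g. `TestClient(app).post("/orders", json={"order_id": "1", "amount": 9.5})` should return status 201.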
Posted 5 days ago
3.0 years
0 Lacs
India
On-site
Lucidworks is leading digital transformation for some of the world's biggest retailers, financial services firms, manufacturers, and B2B commerce organizations. We believe that the core of a great digital experience starts with search and browse. Our deep learning technology captures user behavior and utilizes machine learning to connect people with the products, content, and information they need. Brands including American Airlines, Lenovo, Red Hat, and Cisco Systems rely on Lucidworks' suite of products to power commerce, customer service, and workplace applications that delight customers and empower employees. Lucidworks believes in the power of diversity and inclusion to help us do our best work. We are an Equal Opportunity employer and welcome talent across a full range of backgrounds, orientation, origin, and identity in an inclusive and non-discriminatory way.
About the Team: The technical support team leverages their extensive experience supporting large-scale Solr clusters and the Lucene/Solr ecosystem. Their day might include troubleshooting errors and attempting to fix or develop workarounds, diagnosing network and environmental issues, learning the customer's infrastructure and technologies, as well as reproducing bugs and opening Jira tickets for the engineering team. Their primary tasks are break/fix scenarios where the diagnostics quickly bring network assets back online and prevent future problems--which has a huge impact on our customers’ business.
About the Role: As a Search Engineer in Technical Support, you will play a critical role in helping our clients achieve success with our products. You will be responsible for assisting clients directly in resolving any technical issues they encounter, as well as answering questions about the product and feature functionality. You will work closely with internal teams such as Engineering and Customer Success to resolve a variety of issues, including product defects, performance issues, and feature requests. This role requires excellent problem-solving skills and attention to detail, strong communication abilities, and a deep understanding of search technology. Additionally, this role requires the ability to work independently and as part of a team, and to be comfortable working with both technical and non-technical stakeholders. The successful candidate will demonstrate a passion for delivering an outstanding customer experience, balancing technical expertise with empathy for the customer’s needs. This role is open to candidates in India. The role is expected to participate in weekend on-call rotations.
Responsibilities: Field incoming questions, help users configure Lucidworks Fusion and its components, and help them understand how to use the features of the product. Troubleshoot complex search issues in and around Lucene/Solr. Document solutions into knowledge base articles for use by our customer base in our knowledge center. Identify opportunities to provide customers with additional value through follow-on products and/or services. Communicate high-value use cases and customer feedback to our Product Development and Engineering teams. Collaborate across teams internally to diagnose and resolve critical issues. Participate in a 24/7/365 on-call rotation, which includes weekend and holiday shifts.
Skills & Qualifications: 3+ years of hands-on experience with Lucene/Solr or other search technologies is required. BS or higher in Engineering or Computer Science is preferred. 3+ years of professional experience in a customer-facing level 2-3 tech support role. Experience with technical support CRM systems (Salesforce, Zendesk, etc.). Ability to clearly communicate with customers by email and phone. Proficiency with Java and one or more common scripting languages (Python, Perl, Ruby, etc.). Proficiency with Unix/Linux systems (command-line navigation, file system permissions, system logs and administration, scripting, networking, etc.). Exposure to other related open-source projects (Mahout, Hadoop, Tika, etc.) and commercial search technologies. Enterprise search, eCommerce, and/or business intelligence experience. Knowledge of data science and machine learning concepts. Experience with cloud computing platforms (GCP, Azure, AWS, etc.) and Kubernetes. Startup experience is preferred.
Our Stack: Apache Lucene/Solr, ZooKeeper, Spark, Pulsar, Kafka, Grafana; Java, Python, Linux, Kubernetes; Zendesk, Jira.
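As a rough illustration of the Lucene/Solr troubleshooting this role centers on, the following Python sketch queries a Solr collection's /select handler and prints the server-side query time and parsed query. The host and the "products" collection name are hypothetical and not part of the posting.

```python
import requests

SOLR = "http://localhost:8983/solr"  # hypothetical Solr node

def slow_query_check(collection: str, q: str = "*:*") -> None:
    # The /select handler reports QTime (server-side query time in ms) in its header.
    resp = requests.get(
        f"{SOLR}/{collection}/select",
        params={"q": q, "rows": 0, "debugQuery": "true"},
        timeout=10,
    )
    resp.raise_for_status()
    body = resp.json()
    qtime = body["responseHeader"]["QTime"]
    hits = body["response"]["numFound"]
    print(f"{collection}: {hits} docs matched, QTime={qtime} ms")
    # With debugQuery=true, Solr also returns how the query was parsed.
    print(body.get("debug", {}).get("parsedquery", "no debug info"))

slow_query_check("products")
```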
Posted 5 days ago
0 years
0 Lacs
India
On-site
Hadoop Admin
Location - Bangalore (1st priority) / Pune / Chennai
Interview Mode - Level 1 or 2 will be an F2F discussion
Experience - 7+ Yrs
Regular Shift - 9 AM to 6 PM
JOB SUMMARY:
1) Strong expertise in installing, configuring, and maintaining Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Oozie, Zookeeper, etc.).
2) Monitor cluster performance and capacity; troubleshoot and resolve issues proactively.
3) Manage cluster upgrades, patching, and security updates with minimal downtime.
4) Implement and maintain data security, authorization, and authentication (Kerberos, Ranger, or Sentry).
5) Configure and manage Hadoop high availability, disaster recovery, and backup strategies.
6) Automate cluster monitoring, alerting, and performance tuning (see the sketch below).
7) Work closely with data engineering teams to ensure smooth data pipeline operations.
8) Perform root cause analysis for recurring system issues and implement permanent fixes.
9) Develop and maintain system documentation, including runbooks and SOPs.
10) Support integration with third-party tools (Sqoop, Flume, Kafka, Airflow, etc.).
11) Participate in on-call rotation and incident management for production support.
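Item 6 above (automated monitoring and alerting) is the kind of task typically scripted; below is a minimal, hypothetical sketch that polls the NameNode JMX endpoint for capacity and dead-DataNode counts (port 9870 on Hadoop 3.x; older 2.x clusters expose it on 50070). The host name and thresholds are placeholders, not a prescribed tool.

```python
import requests

NAMENODE_JMX = "http://namenode-host:9870/jmx"  # hypothetical host

def check_hdfs_health(used_pct_threshold: float = 80.0) -> None:
    resp = requests.get(
        NAMENODE_JMX,
        params={"qry": "Hadoop:service=NameNode,name=FSNamesystemState"},
        timeout=10,
    )
    resp.raise_for_status()
    state = resp.json()["beans"][0]
    used_pct = 100.0 * state["CapacityUsed"] / state["CapacityTotal"]
    dead = state["NumDeadDataNodes"]
    print(f"HDFS used: {used_pct:.1f}%  dead datanodes: {dead}")
    if used_pct > used_pct_threshold or dead > 0:
        # In a real setup this would raise an alert (email, PagerDuty, etc.).
        print("ALERT: cluster needs attention")

check_hdfs_health()
```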
Posted 5 days ago
0 years
0 Lacs
India
On-site
The ideal candidate will be responsible for developing high-quality applications. They will also be responsible for designing and implementing testable and scalable code.
Responsibilities: Lead backend Python development for innovative healthcare technology solutions. Oversee a backend team to achieve product and platform goals in the B2B HealthTech domain. Design and implement scalable backend infrastructures with seamless API integration. Ensure availability on immediate or short notice for efficient onboarding and project ramp-up. Optimize existing backend systems based on real-time healthcare data requirements. Collaborate with cross-functional teams to ensure alignment between tech and business goals. Review and refine code for quality, scalability, and performance improvements.
Ideal Candidate: Experienced in building B2B software products using agile methodologies. Strong proficiency in Python, with a deep understanding of backend system architecture. Comfortable with fast-paced environments and quick onboarding cycles. Strong communicator who fosters a culture of innovation, ownership, and collaboration. Passionate about driving real-world healthcare impact through technology.
Skills Required: Primary: TypeScript, AWS, Python, RESTful APIs, backend architecture. Additional: SQL/NoSQL databases, Docker/Kubernetes (preferred).
Strongly Good to Have: Prior experience in data engineering, especially in healthcare or real-time analytics. Familiarity with ETL pipelines, data lake/warehouse solutions, and stream processing frameworks (e.g., Apache Kafka, Spark, Airflow). Understanding of data privacy, compliance (e.g., HIPAA), and secure data handling practices.
Hiring Process: Profile shortlisting, two technical interview rounds, and a culture-fit round.
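The stream-processing item above (Kafka/Spark/Airflow) is easiest to picture with a short consumer; this is a hypothetical sketch using the kafka-python client, with the topic, field names, and anomaly rule invented purely for illustration rather than taken from the role.

```python
import json
from kafka import KafkaConsumer  # kafka-python; confluent-kafka would work similarly

consumer = KafkaConsumer(
    "patient-vitals",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    group_id="vitals-anomaly-checker",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for msg in consumer:
    event = msg.value
    # Trivial rule standing in for real clinical logic or ML scoring.
    if event.get("heart_rate", 0) > 120:
        print(f"flag patient {event.get('patient_id')}: heart_rate={event['heart_rate']}")
```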
Posted 5 days ago
4.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
Experience : 4.00 + years Salary : Confidential (based on experience) Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Office (Ahmedabad) Placement Type : Full time Permanent Position (*Note: This is a requirement for one of Uplers' client - Attri) What do you need for this opportunity? Must have skills required: Azure, Docker, TensorFlow, Python, Shell Scripting Attri is Looking for: About Attri Attri is an AI organization that helps businesses initiate and accelerate their AI efforts. We offer the industry’s first end-to-end enterprise machine learning platform, empowering teams to focus on ML development rather than infrastructure. From ideation to execution, our global team of AI experts supports organizations in building scalable, state-of-the-art ML solutions. Our mission is to redefine businesses by harnessing cutting-edge technology and a unique, value-driven approach. With team members across continents, we celebrate diversity, curiosity, and innovation. We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect. What You'll Do (Responsibilities): Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure. Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation. Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM. Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS. Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild & CodePipeline. Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems. Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with Tensorflow or PyTorch. Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs. Automate operations and system tasks using Python and Bash, along with Cloud CLIs and SDKs. Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control. Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink. Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs. Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances. Contribute to backend development in Python (Web Frameworks), REST/Socket and gRPC design, and testing (unit/integration). Participate in incident response, performance tuning, and continuous system improvement. Good to Have: Hands-on experience with ML lifecycle tools like MLflow and Kubeflow Previous involvement in production-grade AI/ML projects or data-intensive systems Startup or high-growth tech company experience Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role. Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling. Strong communication and collaboration skills to work across engineering, data science, and product teams. 
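Several of the responsibilities above (cost optimization, tagging, automation with Python and cloud SDKs) come down to small scripts; here is one hedged example using boto3 that reports running EC2 instances missing an "owner" tag. The tag name and the choice of AWS are illustrative assumptions, since the role also spans Azure and GCP, and the script assumes AWS credentials are already configured.

```python
import boto3

ec2 = boto3.client("ec2")

def untagged_running_instances(required_tag: str = "owner") -> list[str]:
    # Page through all running instances and collect those lacking the required tag.
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if required_tag not in tags:
                    missing.append(instance["InstanceId"])
    return missing

print(untagged_running_instances())
```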
Benefits: Competitive Salary 💸 Support for continual learning (free books and online courses) 📚 Leveling Up Opportunities 🌱 Diverse team environment 🌍 How to apply for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
Posted 6 days ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Security Engineering team at Uber is focused on making the production and corporate environments secure by default to provide industry-leading solutions for Uber's production services and infrastructure. As a Senior Software Engineer in the Enterprise Application Security team, you will leverage your solid software engineering background in building solutions to ensure the protection of enterprise applications from evolving cyber threats. You will be responsible for designing, implementing, and maintaining advanced security solutions to strengthen Uber's security posture by securing various enterprise applications.
What the Candidate Will Do: Design and implement secure architectures for enterprise applications, ensuring industry best practices. Hands-on coding and code reviews. Build, deploy, configure, and manage a variety of enterprise security solutions, including the Security Posture Management platform. Monitor, analyze, and remediate security risks across enterprise applications. Provide technical and engineering support to security and IT teams performing regular security assessments.
Basic Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 6+ years of experience in software engineering. Strong hands-on technical skills, including proficiency in one or more programming languages (Go, Java, Python, C/C++) and code reviews. Deep understanding of software engineering fundamentals, including algorithms, data structures, system design, and architecture. Excellent analytical and problem-solving skills, with the ability to assess security risks and implement effective solutions. Passion for innovation.
Preferred Qualifications: Knowledge of cybersecurity concepts, tools, and best practices. Security certifications are a plus. Experience with AI/ML technologies/frameworks and incorporating them into production systems. Experience with SQL, Kafka, and databases (including Spark, Hive, SQL, NoSQL, etc.). Experience with modern development practices (e.g., CI/CD, microservices, cloud platforms like AWS/GCP). Experience in building out integrations with open-source and vendor products. Experience with automation and scripting (e.g., Python, Bash) for security operations.
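The last preferred qualification (Python/Bash scripting for security operations) can be made concrete with a small example; the sketch below is a hypothetical, regex-based scan for likely hard-coded secrets in a source tree, not any tool used at Uber, and the patterns are deliberately simplistic.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only: an AWS access key ID prefix and a generic assignment.
PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_secret": re.compile(r"(?i)(password|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
}

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix in {".png", ".jpg", ".zip"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings += 1
                print(f"{path}: possible {name}: {match.group(0)[:20]}...")
    return findings

# Non-zero exit code lets a CI job fail the build when findings exist.
sys.exit(1 if scan(".") else 0)
```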
Posted 6 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
At Seismic, we're proud of our engineering culture where technical excellence and innovation drive everything we do. We're a remote-first data engineering team responsible for the critical data pipeline that powers insights for over 2,300 customers worldwide. Our team manages all data ingestion processes, leveraging technologies like Apache Kafka, Spark, various C# microservices services, and a shift-left data mesh architecture to transform diverse data streams into the valuable reporting models that our customers rely on daily to make data-driven decisions. Additionally, we're evolving our analytics platform to include AI-powered agentic workflows. Who You Are Have working knowledge of one OO language, preferably C#, but won’t hold your Java expertise against you (you’re the type of person who’s interested in learning and becoming an expert at new things). Additionally, we’ve been using Python more and more, and bonus points if you’re familiar with Scala. Have experience with architecturally complex distributed systems. Highly focused on operational excellence and quality – you have a passion to write clean and well tested code and believe in the testing pyramid. Outstanding verbal and written communication skills with the ability to work with others at all levels, effective at working with geographically remote and culturally diverse teams. You enjoy solving challenging problems, all while having a blast with equally passionate team members. Conversant in AI engineering. You’ve been experimenting with building ai solutions/integrations using LLMs, prompts, Copilots, Agentic ReAct workflows, etc. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Collaborating with experienced software engineers, data scientists and product managers to rapidly build, test, and deploy code to create innovative solutions and add value to our customers' experience. Building large scale platform infrastructure and REST APIs serving machine learning driven content recommendations to Seismic products. Leveraging the power of context in third-party applications such as CRMs to drive machine learning algorithms and models. Helping build next-gen Agentic tooling for reporting and insights Processing large amounts of internal and external system data for analytics, caching, modeling and more. Identifying performance bottlenecks and implementing solutions for them. 
Participating in code reviews, system design reviews, agile ceremonies, bug triage, and on-call rotations.
BS or MS in Computer Science, a similar technical field of study, or equivalent practical experience. 3+ years of software development experience within a SaaS business. Must have familiarity with .NET Core, C#, and related frameworks. Experience in data engineering - building and managing data pipelines and ETL processes, and familiarity with various technologies that drive them: Kafka, Fivetran (optional), Spark/Scala (optional), etc. Data warehouse experience with Snowflake or similar (AWS Redshift, Apache Iceberg, ClickHouse, etc.). Familiarity with RESTful microservice-based APIs. Experience with modern CI/CD pipelines and infrastructure (Jenkins, GitHub Actions, Terraform, Kubernetes, or equivalent) is a big plus. Experience with the Scrum and Agile development process. Familiarity developing in cloud-based environments. Optional: experience with third-party integrations. Optional: familiarity with meeting systems like Zoom, WebEx, and MS Teams. Optional: familiarity with CRM systems like Salesforce, Microsoft Dynamics 365, and HubSpot.
If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.
Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries, including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law.
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 6 days ago