
2905 DynamoDB Jobs - Page 2

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Overview
Working at Atlassian: Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities
Your future team: To become a 100-year company, we need a world-class engineering organisation of empowered teams with the tools and infrastructure to do the best work of their careers. As part of a unified R&D team, Engineering is prioritising key initiatives which support our customers as they increase their adoption of Atlassian Cloud products and services while continuing to support their current needs at extreme enterprise scale. We're looking for people who want to write the future and who believe that we can accomplish so much more together. You will report to one of the Engineering Managers of the R&D teams.

What You'll Do
- Build and ship features and capabilities daily in a highly scalable, cross-geo distributed environment
- Be part of an open and collaborative work environment with experienced engineers, architects, product managers, and designers
- Review code with readability, testing patterns, documentation, reliability, security, and performance considerations in mind
- Mentor and level up the skills of your teammates by sharing your expertise in formal and informal knowledge-sharing sessions
- Ensure full visibility, error reporting, and monitoring of high-performing backend services
- Participate in Agile software development, including daily stand-ups, sprint planning, team retrospectives, and show-and-tell demo sessions

Your Background
- 4+ years of experience building and developing backend applications
- Bachelor's or Master's degree, with a preference for Computer Science
- Experience crafting and implementing highly scalable and performant RESTful microservices
- Proficiency in a modern object-oriented programming language (e.g., Java, Kotlin, Go, Scala, Python)
- Fluency in at least one database technology (e.g., an RDBMS like Oracle or Postgres and/or a NoSQL store like DynamoDB or Cassandra)
- Strong understanding of CI/CD reliability principles, including test strategy, security, and performance benchmarking
- Real passion for collaboration and strong interpersonal and communication skills
- Broad knowledge of the SaaS, PaaS, and IaaS industry, with hands-on experience of public cloud offerings (AWS, GAE, Azure)
- Familiarity with cloud architecture patterns and the engineering discipline to produce quality software

Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you and your family and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and much more. To learn more, visit go.atlassian.com/perksandbenefits.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone possible together. We believe that the unique contributions of all Atlassians create our success.
To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines. To provide the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process; simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.
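For a flavor of the "fluency in one database technology" requirement above, here is a minimal, illustrative DynamoDB sketch in Python with boto3; the table name, key, and attributes are hypothetical, not taken from the posting:

```python
# Minimal DynamoDB put/get sketch using boto3 (pip install boto3).
# A "users" table with partition key "user_id" is a hypothetical example.
import boto3

dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
table = dynamodb.Table("users")

# Write one item; DynamoDB is schemaless beyond the key attributes.
table.put_item(Item={"user_id": "u-123", "name": "Asha", "plan": "cloud"})

# Read it back by primary key; the strongly consistent read is optional.
resp = table.get_item(Key={"user_id": "u-123"}, ConsistentRead=True)
print(resp.get("Item"))
```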

Posted 1 day ago

Apply

3.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in.

Job Description

REQUIREMENTS:
- Total experience: 3+ years
- Strong working experience in backend development with Java and Spring Boot
- Hands-on experience with RESTful APIs, JMS, JPA, Spring MVC, and Hibernate
- Strong understanding of messaging systems (Kafka, SQS) and caching technologies (Redis)
- Experience with SQL (Aurora MySQL) and NoSQL databases (Cassandra, DynamoDB, Elasticsearch)
- Proficiency with CI/CD pipelines, Java build tools, and modern DevOps practices
- Exposure to AWS services such as EC2, S3, RDS, DynamoDB, and EMR
- Familiarity with Kubernetes-based orchestration and event-driven architecture
- Experience working in Agile environments with minimal supervision
- Experience with observability tools and performance tuning
- Understanding of orchestration patterns and microservice architecture
- Strong communication skills and the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:
- Writing and reviewing high-quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities
- Leading and supporting UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly and responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team troubleshoot and resolve complex bugs
- Proposing solutions to issues raised during code/design reviews and justifying the decisions taken
- Carrying out POCs to make sure the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
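As an illustration of the caching pattern this stack implies (Redis in front of a slower store such as Aurora MySQL or DynamoDB), here is a minimal cache-aside sketch in Python; the key format, TTL, and lookup function are hypothetical:

```python
# Cache-aside sketch: check Redis first, fall back to the database,
# then populate the cache with a TTL. (pip install redis)
import json
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def fetch_product_from_db(product_id: str) -> dict:
    # Placeholder for a real Aurora MySQL / DynamoDB lookup.
    return {"id": product_id, "name": "demo", "price": 100}

def get_product(product_id: str) -> dict:
    key = f"product:{product_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit
    product = fetch_product_from_db(product_id)
    r.setex(key, 300, json.dumps(product))  # cache for 5 minutes
    return product
```

The TTL bounds staleness: a shorter expiry gives fresher data at the cost of more database reads, which is the usual trade-off discussed in this kind of role.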

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in.

Job Description

REQUIREMENTS:
- Total experience: 5+ years
- Strong working experience in backend development with Java and Spring Boot
- Hands-on experience with RESTful APIs, JMS, JPA, Spring MVC, and Hibernate
- Strong understanding of messaging systems (Kafka, SQS) and caching technologies (Redis)
- Experience with SQL (Aurora MySQL) and NoSQL databases (Cassandra, DynamoDB, Elasticsearch)
- Proficiency with CI/CD pipelines, Java build tools, and modern DevOps practices
- Exposure to AWS services such as EC2, S3, RDS, DynamoDB, and EMR
- Familiarity with Kubernetes-based orchestration and event-driven architecture
- Experience working in Agile environments with minimal supervision
- Experience with observability tools and performance tuning
- Understanding of orchestration patterns and microservice architecture
- Strong communication skills and the ability to collaborate effectively with cross-functional teams

RESPONSIBILITIES:
- Writing and reviewing high-quality code
- Understanding functional requirements thoroughly and analyzing the client's needs in the context of the project
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it
- Determining and implementing design methodologies and tool sets
- Enabling application development by coordinating requirements, schedules, and activities
- Leading and supporting UAT and production rollouts
- Creating, understanding, and validating the WBS and estimated effort for a given module/task, and being able to justify it
- Addressing issues promptly and responding positively to setbacks and challenges with a mindset of continuous improvement
- Giving constructive feedback to team members and setting clear expectations
- Helping the team troubleshoot and resolve complex bugs
- Proposing solutions to issues raised during code/design reviews and justifying the decisions taken
- Carrying out POCs to make sure the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
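For the messaging-systems requirement (Kafka, SQS), here is a minimal SQS long-polling consumer sketch in Python with boto3; the queue URL is hypothetical:

```python
# Long-polling SQS consumer sketch with boto3; queue URL is hypothetical.
import boto3

sqs = boto3.client("sqs", region_name="ap-south-1")
QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/orders"

while True:
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling reduces empty receives
    )
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Delete only after successful processing; SQS is at-least-once,
        # so handlers should be idempotent.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```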

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are looking for a Python Developer (AWS) with a minimum of 4 years of experience to join our team, with positions located across Pan India. If you have a passion for cloud technologies, API development, and backend systems, this opportunity is ideal for you!

As a Python + AWS Developer, you should possess the following key skills:
- Strong proficiency in Python programming
- Hands-on experience with various AWS services, including S3, Lambda, Glue, VPC, and EC2
- Proficiency in REST API development and deployment
- Knowledge of Docker and layers
- Good understanding of SQL Server
- Familiarity with authentication and authorization mechanisms
- Experience with DynamoDB is a plus
- Bonus points for experience with Neoload, Dynatrace, CI/CD tools, and OpenShift
- Additional experience in SFTP APIs, file server management, and batch processing is beneficial

Joining our team offers you:
- A collaborative work environment with ample growth opportunities
- Hands-on experience with cutting-edge technologies
- Immediate joining opportunities within 15 days

If you meet the requirements and are excited about this opportunity, please send your resume to uzair.ahmed@obligeit.com. Don't miss out on this chance, and feel free to refer someone who might be interested!

#Python #AWS #DevOps #Cloud #Hiring #Developers #ImmediateJoiner #AWSLambda #S3 #DynamoDB #APIDevelopment #Docker #CI/CD #SQLServer #TechJobs #JobOpening #OpenShift #BackendDeveloper #PythonDeveloper #CloudJobs
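As a small illustration of the Lambda + S3 skills listed above, here is a sketch of an S3-triggered Lambda handler in Python; the bucket and key come from the event, and the handler body is a hypothetical example:

```python
# AWS Lambda handler sketch: triggered by an S3 "ObjectCreated" event,
# reads the new object and logs its size.
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        body = obj["Body"].read()
        print(f"read s3://{bucket}/{key} ({len(body)} bytes)")
    return {"statusCode": 200}
```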

Posted 1 day ago

Apply

3.0 years

0 Lacs

Roorkee, Uttarakhand, India

Remote

Company Description
Miratech helps visionaries change the world. We are a global IT services and consulting company that brings together enterprise and start-up innovation. Today, we support digital transformation for some of the world's largest enterprises. By partnering with both large and small players, we stay at the leading edge of technology, remain nimble even as a global leader, and create technology that helps our clients further enhance their business. We are a values-driven organization, and our culture of Relentless Performance has enabled over 99% of Miratech's engagements to succeed by meeting or exceeding our scope, schedule, and/or budget objectives since our inception in 1989. Miratech has coverage across 5 continents and operates in over 25 countries around the world. Miratech retains nearly 1,000 full-time professionals, and our annual growth rate exceeds 25%.

Job Description
We are seeking a skilled and collaborative Conversational AI Developer to join our team focused on IVR modernization and migration to next-gen voice IVR/bots. This role involves designing, developing, and integrating platforms like Google Dialogflow. You will work closely with cross-functional teams to ensure seamless transitions from legacy IVR systems to modern, AI-powered solutions.

Responsibilities:
- Design and develop conversational experiences using Dialogflow CX/ES
- Build and maintain backend services and integrations using Python, Node.js, or JavaScript
- Implement NLP and ML models using industry-standard libraries and frameworks
- Develop and optimize prompts for natural and effective user interactions
- Integrate bots with external systems via RESTful APIs, webhooks, and middleware
- Collaborate with Dev, QE, and support teams to ensure smooth IVR-to-bot migration and end-to-end testing
- Monitor and troubleshoot production issues using tools like Cloud Logging, CloudWatch, and DynamoDB
- Contribute to the design of scalable, secure, and maintainable bot architectures

Qualifications
- 3+ years of experience with Dialogflow CX and Google CCAI
- Proficiency in Python, Node.js, or JavaScript
- Strong understanding of Natural Language Processing (NLP) and related ML frameworks
- Experience with prompt engineering for conversational design
- Solid knowledge of API integration, including RESTful services and middleware
- Familiarity with IVR systems and experience in IVR-to-bot migration projects
- Strong teamwork and communication skills, with a collaborative mindset

Nice to have:
- Experience in contact center technologies and operating models
- Experience with Amazon Lex, AWS Lambda, Connect, IAM, CloudWatch, and DynamoDB
- Background in performance monitoring and incident management in production environments
- Exposure to Agile development practices and CI/CD pipelines

We offer:
- Culture of Relentless Performance: join an unstoppable technology development team with a 99% project success rate and more than 30% year-over-year revenue growth.
- Competitive Pay and Benefits: enjoy a comprehensive compensation and benefits package, including health insurance, language courses, and a relocation program.
- Work From Anywhere Culture: make the most of the flexibility that comes with remote work.
- Growth Mindset: reap the benefits of a range of professional development opportunities, including certification programs, mentorship and talent investment programs, internal mobility, and internship opportunities.
- Global Impact: collaborate on impactful projects for top global clients and shape the future of industries.
- Welcoming Multicultural Environment: be a part of a dynamic, global team and thrive in an inclusive and supportive work environment with open communication and regular team-building company social events.
- Social Sustainability Values: join our sustainable business practices focused on five pillars, including IT education, community empowerment, fair operating practices, environmental sustainability, and gender equality.

Miratech is an equal opportunity employer and does not discriminate against any employee or applicant for employment on the basis of race, color, religion, sex, national origin, age, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable law.
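To illustrate the webhook-integration work described above, here is a minimal Dialogflow CX webhook sketch in Python with Flask; the tag and parameter names are hypothetical, and the request/response shapes follow the published Dialogflow CX webhook format as we understand it:

```python
# Dialogflow CX webhook fulfillment sketch using Flask (pip install flask).
# The tag name "order-status" and the order_id parameter are hypothetical.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(silent=True) or {}
    tag = req.get("fulfillmentInfo", {}).get("tag", "")
    params = req.get("sessionInfo", {}).get("parameters", {})

    if tag == "order-status":
        reply = f"Order {params.get('order_id', 'unknown')} is on its way."
    else:
        reply = "Sorry, I didn't catch that."

    # Dialogflow CX expects fulfillment_response.messages[].text.text[].
    return jsonify({
        "fulfillment_response": {
            "messages": [{"text": {"text": [reply]}}]
        }
    })
```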

Posted 1 day ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As a Senior Solution Architect in Database Technology based in Bengaluru, India, with 7 to 10 years of experience, you will lead the design and implementation of scalable, high-performance, and secure database solutions. Your expertise will be crucial in recommending the selection and integration of database technologies, including relational, NoSQL, and cloud-native platforms. You will develop detailed system architecture blueprints, data models, schemas, and integration flows to ensure alignment with business use cases and performance objectives.

Collaboration with stakeholders, technical teams, and project managers will be essential to translate business requirements into technical solutions. You will support client consultations through pre-sales activities, technical presentations, estimations, and scoping documentation, and advise clients on best practices for database management, performance optimization, and scalability.

Embracing technology leadership, you will lead the evaluation and adoption of emerging database technologies and trends while guiding junior architects and developers in applying best practices. Overseeing performance optimization and ensuring database systems meet defined SLAs will be critical to maintaining operational efficiency. You will ensure successful end-to-end implementation of database solutions across multiple projects, manage third-party database vendors and tool integrations, and contribute to project roadmaps aligned with timelines, budgets, and architectural goals.

Maintaining security and compliance will be a key aspect of the role: ensuring adherence to industry regulations such as GDPR, HIPAA, and SOC 2; implementing data protection strategies; conducting regular security and performance audits; and overseeing disaster recovery plans for all database systems.

To excel in this role, you should have:
- Strong hands-on experience with relational databases such as Oracle, SQL Server, MySQL, and PostgreSQL
- Experience with NoSQL databases such as MongoDB, Cassandra, and DynamoDB
- Proficiency in cloud-based database services such as AWS RDS, Azure SQL Database, and Google Cloud SQL
- Expertise in designing and scaling transactional and analytical workloads
- Familiarity with ETL, data warehousing, and integration tools
- Knowledge of tools such as Liquibase, Flyway, Docker, Kubernetes, and database monitoring tools

Your leadership and communication skills will play a significant role in mentoring cross-functional teams, managing stakeholders, and handling multiple projects effectively. Preferred certifications include Oracle Certified Architect (or similar) and AWS/Azure/GCP Solutions Architect certifications.

If you are a skilled professional with a passion for designing large-scale database architectures, optimizing performance, and ensuring security and compliance, this role offers an exciting opportunity to lead database technology solutions and drive innovation in a dynamic environment.
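As a concrete example of the access-pattern-first NoSQL design this role assesses, here is a minimal DynamoDB global secondary index (GSI) query in Python with boto3; the table, index, and attribute names are hypothetical:

```python
# Query a hypothetical "orders" table through a GSI keyed on
# (status, order_date) -- the access pattern drives the index design.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("orders")

resp = table.query(
    IndexName="status-date-index",
    KeyConditionExpression=(
        Key("status").eq("SHIPPED") & Key("order_date").begins_with("2024-")
    ),
    Limit=25,
)
for item in resp["Items"]:
    print(item["order_id"], item["order_date"])
```

In DynamoDB, unlike an RDBMS, you generally design the table and its indexes around the queries you must serve, not the other way around; evaluating that trade-off is part of the architecture work described above.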

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting of data pipelines.
- Designs, develops, and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders to enable improved productivity and effective knowledge transfer to others who were not part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required (5-8 years of experience). Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
- Familiarity analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a big data platform using open-source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement in a cloud-based environment, plus other data extraction tools and methods from a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- Experience with IoT technology
- Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Be responsible for the creation, maintenance, and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in the evaluation of new data tools and POCs and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala
- Database Management: Expertise in SQL and NoSQL databases
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks
- Cloud Services: Experience with Azure, Databricks, and AWS cloud platforms
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes
- Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus
- API: Working knowledge of APIs to consume data from ERP and CRM systems

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417810
Relocation Package: Yes
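To make the pipeline responsibilities above concrete, here is a minimal PySpark ETL sketch (read from a JDBC source, deduplicate, write partitioned Parquet to a lake path); the connection details, table, and paths are hypothetical:

```python
# PySpark sketch: extract from a relational source, apply a light
# transform, and append partitioned Parquet to a data-lake location.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://erp-host:3306/erp")
          .option("dbtable", "orders")
          .option("user", "etl").option("password", "***")
          .load())

cleaned = (orders
           .dropDuplicates(["order_id"])                  # basic integrity step
           .withColumn("order_date", F.to_date("created_at")))

(cleaned.write.mode("append")
 .partitionBy("order_date")                               # partition for pruning
 .parquet("abfss://lake@account.dfs.core.windows.net/curated/orders"))
```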

Posted 1 day ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Leads projects for the design, development, and maintenance of a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with key business stakeholders, IT experts, and subject-matter experts to plan, design, and deliver optimal analytics and data science solutions. Works on one or many product teams at a time.

Key Responsibilities
- Designs and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Designs and implements a framework to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Designs and provides guidance on building reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Designs and implements physical data models to define the database structure, optimizing database performance through efficient indexing and table relationships.
- Participates in optimizing, testing, and troubleshooting of data pipelines.
- Designs, develops, and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses innovative and modern tools, techniques, and architectures to partially or completely automate the most common, repeatable, and tedious data preparation and integration tasks, minimizing manual and error-prone processes and improving productivity.
- Assists with renovating the data management infrastructure to drive automation in data integration and management.
- Ensures the timeliness and success of critical analytics initiatives by using agile development technologies such as DevOps, Scrum, and Kanban.
- Coaches and develops less experienced team members.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders to enable improved productivity and effective knowledge transfer to others who were not part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
Intermediate experience in a relevant discipline area is required (5-8 years of experience). Knowledge of the latest technologies and trends in data engineering is highly preferred and includes:
- Familiarity analyzing complex business systems, industry requirements, and/or data regulations
- Background in processing and managing large data sets
- Design and development for a big data platform using open-source and third-party tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Experience developing applications requiring large file movement in a cloud-based environment, plus other data extraction tools and methods from a variety of sources
- Experience in building analytical solutions

Intermediate experience in the following is preferred:
- Experience with IoT technology
- Experience in Agile software development

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Play a key role across DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Independently design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Be responsible for the creation, maintenance, and management of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs).
6) Take part in the evaluation of new data tools and POCs and provide suggestions.
7) Take full ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization.
8) Proactively address and resolve issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala
- Database Management: Expertise in SQL and NoSQL databases
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks
- Cloud Services: Experience with Azure, Databricks, and AWS cloud platforms
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes
- Data Replication: Working knowledge of replication technologies like Qlik Replicate is a plus
- API: Working knowledge of APIs to consume data from ERP and CRM systems

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417809
Relocation Package: Yes
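For the "monitor and troubleshoot data quality" responsibility above, here is a minimal pandas sketch of null-rate and duplicate checks that could feed an alerting mechanism; the thresholds, key, and column names are hypothetical:

```python
# Data-quality sketch: flag columns with excessive nulls and rows that
# duplicate the business key. Output could be routed to an alert channel.
import pandas as pd

def quality_report(df: pd.DataFrame, key: str, max_null_rate: float = 0.01):
    issues = []
    for col, rate in df.isna().mean().items():
        if rate > max_null_rate:
            issues.append(f"{col}: null rate {rate:.2%} exceeds {max_null_rate:.0%}")
    dupes = int(df.duplicated(subset=[key]).sum())
    if dupes:
        issues.append(f"{dupes} duplicate rows on key '{key}'")
    return issues

df = pd.DataFrame({"order_id": [1, 1, 2], "amount": [10.0, None, 5.0]})
print(quality_report(df, key="order_id"))
# ['amount: null rate 33.33% exceeds 1%', "1 duplicate rows on key 'order_id'"]
```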

Posted 1 day ago

Apply

4.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Description
GPP Database Link: https://cummins365.sharepoint.com/sites/CS38534/

Job Summary
Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with the business and IT teams to understand the requirements and best leverage the technologies to enable agile data delivery at scale.

Key Responsibilities
- Implements and automates deployment of our distributed system for ingesting and transforming data from various types of sources (relational, event-based, unstructured).
- Implements methods to continuously monitor and troubleshoot data quality and data integrity issues.
- Implements data governance processes and methods for managing metadata, access, and retention for internal and external users.
- Develops reliable, efficient, scalable, and quality data pipelines with monitoring and alert mechanisms that combine a variety of sources using ETL/ELT tools or scripting languages.
- Develops physical data models and implements data storage architectures as per design guidelines.
- Analyzes complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models.
- Participates in testing and troubleshooting of data pipelines.
- Develops and operates large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB, others).
- Uses agile development technologies, such as DevOps, Scrum, Kanban, and the continuous improvement cycle, for data-driven applications.

Competencies
- System Requirements Engineering: Uses appropriate methods and tools to translate stakeholder needs into verifiable requirements to which designs are developed; establishes acceptance criteria for the system of interest through analysis, allocation, and negotiation; tracks the status of requirements throughout the system lifecycle; assesses the impact of changes to system requirements on project scope, schedule, and resources; creates and maintains information linkages to related artifacts.
- Collaborates: Building partnerships and working collaboratively with others to meet shared objectives.
- Communicates effectively: Developing and delivering multi-mode communications that convey a clear understanding of the unique needs of different audiences.
- Customer focus: Building strong customer relationships and delivering customer-centric solutions.
- Decision quality: Making good and timely decisions that keep the organization moving forward.
- Data Extraction: Performs data extract-transform-load (ETL) activities from a variety of sources and transforms them for consumption by various downstream applications and users, using appropriate tools and technologies.
- Programming: Creates, writes, and tests computer code, test scripts, and build scripts using algorithmic analysis and design, industry standards and tools, version control, and build and test automation to meet business, technical, security, governance, and compliance requirements.
- Quality Assurance Metrics: Applies the science of measurement to assess whether a solution meets its intended outcomes using the IT Operating Model (ITOM), including the SDLC standards, tools, metrics, and key performance indicators, to deliver a quality product.
- Solution Documentation: Documents information and solutions based on knowledge gained during product development activities; communicates to stakeholders to enable improved productivity and effective knowledge transfer to others who were not part of the initial learning.
- Solution Validation Testing: Validates a configuration item change or solution using the Function's defined best practices, including the Systems Development Life Cycle (SDLC) standards, tools, and metrics, to ensure that it works as designed and meets customer requirements.
- Data Quality: Identifies, understands, and corrects flaws in data to support effective information governance across operational business processes and decision making.
- Problem Solving: Solves problems and may mentor others on effective problem solving, using a systematic analysis process and industry-standard methodologies to create problem traceability and protect the customer; determines the assignable cause; implements robust, data-based solutions; identifies systemic root causes and ensures actions to prevent problem recurrence are implemented.
- Values differences: Recognizing the value that different perspectives and cultures bring to an organization.

Education, Licenses, Certifications
College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Experience
4-5 years of experience. Relevant experience preferred, such as temporary student employment, an internship, a co-op, or other extracurricular team activities. Knowledge of the latest technologies in data engineering is highly preferred and includes:
- Exposure to big data open-source tools: Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka (or equivalent college coursework)
- SQL query language
- Clustered compute cloud-based implementation experience
- Familiarity developing applications requiring large file movement in a cloud-based environment
- Exposure to Agile software development
- Exposure to building analytical solutions
- Exposure to IoT technology

Qualifications
1) Work closely with the business Product Owner to understand the product vision.
2) Participate in DBU Data & Analytics Power Cells to define and develop data pipelines for efficient data transport into the Cummins Digital Core (Azure Data Lake, Snowflake).
3) Collaborate closely with AAI Digital Core and AAI Solutions Architecture to ensure alignment with DBU project data pipeline design standards.
4) Work under limited supervision to design, develop, test, and implement complex data pipelines from transactional systems (ERP, CRM) to data warehouses and the data lake.
5) Be responsible for the creation of DBU Data & Analytics data engineering documentation and standard operating procedures (SOPs), with guidance from senior data engineers.
6) Take part in the evaluation of new data tools and POCs, with guidance from senior data engineers.
7) Take ownership of the developed data pipelines, providing ongoing support for enhancements and performance optimization under limited supervision.
8) Assist in resolving issues that compromise data accuracy and usability.

Preferred Skills
- Programming Languages: Proficiency in languages such as Python, Java, and/or Scala
- Database Management: Intermediate expertise in SQL and NoSQL databases
- Big Data Technologies: Experience with Hadoop, Spark, Kafka, and other big data frameworks
- Cloud Services: Experience with Azure, Databricks, and AWS cloud platforms
- ETL Processes: Strong understanding of Extract, Transform, Load (ETL) processes
- API: Working knowledge of APIs to consume data from ERP and CRM systems

Job: Systems/Information Technology
Organization: Cummins Inc.
Role Category: Remote
Job Type: Exempt - Experienced
ReqID: 2417808
Relocation Package: Yes
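As an illustration of the Kafka exposure mentioned above, here is a minimal kafka-python consumer sketch; the topic, broker address, and group id are hypothetical:

```python
# Kafka consumer sketch (pip install kafka-python): joins a consumer
# group, reads JSON messages from a topic, and prints their metadata.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "telemetry",
    bootstrap_servers=["broker1:9092"],
    group_id="ingest-dev",
    auto_offset_reset="earliest",  # start from the oldest retained message
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```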

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Who You'll Work With
Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high-performance/high-reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won't find anywhere else.

When you join us, you will have:
- Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey.
- A voice that matters: From day one, we value your ideas and contributions. You'll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes.
- Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm's diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you'll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences.
- World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package to enable holistic well-being for you and your family.

Your Impact
You will be developing PlanetView, a Software-as-a-Service platform which helps the financial sector understand and manage climate change risks and quantify carbon emissions. You will work alongside our physical and transition risk modelling teams, following agile processes, to bring analytical approaches and features to production. We work with financial data, terabytes of global climate data, and a wide range of environmental, social, and corporate governance (ESG) data, and integrate them in our class-leading advanced economic models.

In this role, you will be responsible for maintaining and scaling the capabilities of the Django and Django REST Framework based core application and several backend microservices based on FastAPI and Pydantic. You'll have the opportunity to significantly influence the design of our backend processes. You will also manage your day-to-day priorities, time, and commitments within your team setting while ensuring that technical standards and best practices are exercised. Lastly, you will apply new knowledge and innovation to the existing codebase.

At Planetrics, you will be at the forefront of new technologies, applying best practices to the development of the PlanetView solution. You will deliver real impact by identifying potential risks and capturing strategic opportunities of different climate-change policies and climate-related technologies worldwide. You will work in an environment that puts sustainability, diversity, and digital transformation at the heart of what we do.

Your Qualifications and Skills
- Degree in Computer Science, Engineering, Mathematics, Quantitative Methods, or a related field
- Proven track record of developing and maintaining production-level code
- Strong proficiency in Python, with a focus on writing clean, efficient, and production-ready code
- Deep expertise in building enterprise applications using the Django, Django REST Framework, FastAPI, and Pydantic frameworks
- Extensive experience with relational databases, SQL, and the Django ORM
- Working knowledge of Python data analytics and visualisation libraries such as Pandas, NumPy, Polars, and Plotly
- Hands-on experience designing and implementing microservice architectures and distributed systems
- Practical knowledge of AWS services such as EKS, RDS, Lambda, S3, DynamoDB, ElastiCache, SQS, and AWS Batch
- Passion for automation, with experience in containerisation (e.g., Docker), shell scripting, and CI/CD pipelines, including GitHub Actions
- Solid understanding of software engineering best practices throughout the development lifecycle, including Agile methodologies, coding standards, peer code reviews, version control, build processes, testing, and deployment
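To illustrate the FastAPI + Pydantic microservice shape this role maintains, here is a minimal sketch; the route and model fields are hypothetical examples, not PlanetView's actual API:

```python
# FastAPI + Pydantic sketch: a typed request/response endpoint.
# Run with: uvicorn app:app --reload  (pip install fastapi uvicorn)
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class EmissionsQuery(BaseModel):
    portfolio_id: str
    scenario: str = "baseline"

class EmissionsResult(BaseModel):
    portfolio_id: str
    tonnes_co2e: float

@app.post("/emissions", response_model=EmissionsResult)
def compute_emissions(q: EmissionsQuery) -> EmissionsResult:
    # Placeholder for the real model run; Pydantic validates the
    # request body and serializes the response automatically.
    return EmissionsResult(portfolio_id=q.portfolio_id, tonnes_co2e=123.4)
```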

Posted 1 day ago

Apply

2.0 - 4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Profile
DataNeuron is headquartered in San Francisco with offices in New Delhi and focuses on the development and scaling of AI models. The company offers a data-centric, end-to-end solution for the annotation, training, and management of machine learning models and LLMs. With DataNeuron, customers can quickly and easily produce prediction algorithms without writing any code. Our team is made up of an innovative group of data scientists, engineers, and program managers united by a passion for solving challenging problems in NLP at scale. We're committed to ensuring that our components deliver the quality and performance that our customers expect.

About The Role
As a Full Stack Engineer (MERN) at DataNeuron, you will play a pivotal role in the design, development, deployment, and maintenance of our cutting-edge applications and their various modules. This position offers an exciting opportunity for individuals with 2-4 years of experience to contribute to a dynamic team dedicated to accelerating and scaling the development of AI models.

Location: Hybrid (Noida, India); Experience: 2-4 years

Responsibilities:
- Spearhead the end-to-end development lifecycle of multiple application modules, showcasing ownership and technical leadership.
- Collaborate seamlessly with cross-functional tech teams to architect, develop, and maintain our suite of innovative software products.
- Demonstrate prowess in coding, debugging, and troubleshooting, ensuring the delivery of high-quality solutions.
- Efficiently manage deadlines and deliverables, exhibiting a results-oriented approach.
- Foster a proactive mindset for continuous personal and professional growth, stepping out of your comfort zone to embrace challenges.
- Actively contribute to architectural and design discussions, bringing creative insights to enhance overall product quality.
- Cultivate strong communication skills to articulate ideas effectively within the team, contributing to the creation of a product that is a true work of art.

Requirements:
- Proficient command of JavaScript, TypeScript, and Node.js, with the ability to apply these languages effectively in practical solutions.
- Proven hands-on experience with single-page application technologies, including React.js, Tailwind CSS, and zustand, coupled with a strong foundation in unit testing.
- Proven experience in the design and development of complex data-intensive systems.
- Solid understanding and practical application of NoSQL databases such as MongoDB and DynamoDB.
- Demonstrated capability to independently design robust products and features, showcasing creativity and problem-solving skills.
- Hands-on experience with cloud providers, especially AWS, is required.
- Expertise in deploying applications using Kubernetes and Docker, ensuring seamless integration with the broader tech environment.
- Proficiency in CI/CD practices and bash scripting, ensuring efficient and automated development workflows.
- Advanced knowledge of Git and GitHub Actions, facilitating collaborative and version-controlled development.
- Familiarity with monitoring tools like Prometheus and logging tools such as Elastic Stack (ELK) and Loki.
- Sound understanding of VNet/subnet planning and networking, coupled with expertise in VPC security.
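The stack here is MERN, but to keep the sketches on this page in a single language, here is the NoSQL document pattern the role describes, shown in Python with pymongo; the database, collection, and fields are hypothetical:

```python
# pymongo sketch of the document-store pattern (pip install pymongo).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
annotations = client["dataneuron_demo"]["annotations"]

# Insert a document; the store enforces no fixed schema beyond _id.
annotations.insert_one({"doc_id": "d-1", "label": "invoice", "confidence": 0.92})

# Query by field and project away the internal _id for display.
for doc in annotations.find({"label": "invoice"}, {"_id": 0}):
    print(doc)
```

The equivalent Mongoose code in Node.js follows the same shape: a collection of JSON-like documents queried by field rather than by joined tables.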

Posted 1 day ago

Apply

4.0 years

20 - 30 Lacs

India

Remote

Job Title: Senior Golang Backend Developer
Company Type: IT Services Company
Employment Type: Full-Time
Location: Ahmedabad / Rajkot (preferred) or 100% remote (open)
Experience Required: 4+ years (minimum 3.5 years of hands-on experience with Golang)

About The Role
We are hiring a Senior Golang Backend Developer for a leading service-based tech company in Ahmedabad. If you're a passionate backend engineer who thrives on building scalable APIs, working on microservices architecture, and deploying applications using serverless frameworks on AWS, this role is for you! This is a full-time opportunity; while we prefer candidates who can work from Ahmedabad or Rajkot, we're also open to 100% remote working for the right candidate.

Key Responsibilities
- Design, build, and maintain RESTful APIs and backend services using Golang
- Develop scalable solutions using microservices architecture
- Optimize system performance, reliability, and maintainability
- Work with AWS cloud services (Lambda, SQS, SNS, S3, DynamoDB, etc.) and implement serverless architecture
- Ensure clean, maintainable code through best practices and code reviews
- Collaborate with cross-functional teams on integration and architecture decisions
- Monitor, troubleshoot, and improve application performance using observability tools
- Implement CI/CD pipelines and participate in Agile development practices

Required Skills & Experience
- 4+ years of total backend development experience
- 3.5+ years of strong, hands-on experience with Golang
- Proficient in designing and developing RESTful APIs
- Solid understanding and implementation experience of microservices architecture
- Proficient in AWS cloud services, especially Lambda, SQS, SNS, S3, and DynamoDB
- Experience with serverless architecture
- Familiarity with Docker, Kubernetes, and GitHub Actions/GitLab CI
- Understanding of concurrent programming and performance optimization
- Experience with observability and monitoring tools (e.g., DataDog, Prometheus, New Relic, OpenTelemetry)
- Strong communication skills and ability to work in Agile teams
- Fluency in English communication is a must

Nice to Have
- Experience with Domain-Driven Design (DDD)
- Familiarity with automated testing frameworks (TDD/BDD)
- Prior experience working in distributed remote teams

Why You Should Apply
- Opportunity to work with modern tools and cloud-native technologies
- Flexibility to work remotely or from Ahmedabad/Rajkot
- Supportive, collaborative, and inclusive team culture
- Competitive salary with opportunities for growth and upskilling

Skills: AWS Lambda, serverless architecture, observability tools, behavior-driven development (BDD), Amazon SQS, Go (Golang), domain-driven design (DDD), AWS cloud services, RESTful APIs, GitLab CI, microservices architecture, GitHub Actions, automated testing frameworks, Kubernetes, Docker, AWS

Posted 1 day ago

Apply

5.0 - 7.0 years

4 - 10 Lacs

Hyderabad

On-site

Description
The U.S. Pharmacopeial Convention (USP) is an independent scientific organization that collaborates with the world's top authorities in health and science to develop quality standards for medicines, dietary supplements, and food ingredients. USP's fundamental belief that Equity = Excellence manifests in our core value of Passion for Quality through our more than 1,300 hard-working professionals across twenty global locations to deliver the mission to strengthen the supply of safe, quality medicines and supplements worldwide.

At USP, we value inclusivity for all. We recognize the importance of building an organizational culture with meaningful opportunities for mentorship and professional growth. From the standards we create, the partnerships we build, and the conversations we foster, we affirm the value of Diversity, Equity, Inclusion, and Belonging in building a world where everyone can be confident of quality in health and healthcare. USP is proud to be an equal employment opportunity employer (EEOE) and affirmative action employer. We are committed to creating an inclusive environment in all aspects of our work—an environment where every employee feels fully empowered and valued irrespective of, but not limited to, race, ethnicity, physical and mental abilities, education, religion, gender identity and expression, life experience, sexual orientation, country of origin, regional differences, work experience, and family status. We are committed to working with and providing reasonable accommodation to individuals with disabilities.

Brief Job Overview
The Digital & Innovation group at USP is seeking Full Stack Developers with programming skills in cloud technologies to build innovative digital products. We are seeking someone who understands the power of digitization and can help drive an amazing digital experience for our customers.

How will YOU create impact here at USP?
In this role at USP, you contribute to USP's public health mission of increasing equitable access to high-quality, safe medicine and improving global health through public standards and related programs. In addition, as part of our commitment to our employees, Global, People, and Culture, in partnership with the Equity Office, regularly invests in the professional development of all people managers. This includes training in inclusive management styles and other competencies necessary to ensure engaged and productive work environments.

The Sr. Software Engineer/Software Engineer has the following responsibilities:
Build scalable applications/platforms using cutting-edge cloud technologies.
Constantly review and upgrade the systems based on governance principles and security policies.
Participate in code reviews, architecture discussions, and agile development processes to ensure high-quality, maintainable, and scalable code.
Document and communicate technical designs, processes, and solutions to both technical and non-technical stakeholders.

Who is USP Looking For?
The successful candidate will have a demonstrated understanding of our mission, commitment to excellence through inclusive and equitable behaviors and practices, and the ability to quickly build credibility with stakeholders, along with the following competencies and experience:

Education
Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Experience
Sr. Software Engineer: 5-7 years of experience in software development, with a focus on cloud computing
Software Engineer: 2-4 years of experience in software development, with a focus on cloud computing
Strong knowledge of cloud platforms (e.g., AWS, Azure, Google Cloud) and services, including compute, storage, networking, and security
Extensive knowledge of Java Spring Boot applications and design principles
Strong programming skills in languages such as Python
Good experience with AWS/Azure services, such as EC2, S3, IAM, Lambda, RDS, DynamoDB, API Gateway, and CloudFormation
Knowledge of cloud architecture patterns, best practices, and security principles
Familiarity with data pipeline/ETL/orchestration tools, such as Apache NiFi, AWS Glue, or Apache Airflow (a small Airflow sketch follows this listing)
Good experience with front-end technologies like React.js/Node.js, etc.
Strong experience in microservices and automated testing practices
Experience leading initiatives related to continuous improvement or implementation of new technologies
Works independently on most deliverables
Strong analytical and problem-solving skills, with the ability to develop creative solutions to complex problems
Ability to manage multiple projects and priorities in a fast-paced, dynamic environment

Additional Desired Preferences
Experience with scientific chemistry nomenclature or prior work experience in life sciences, chemistry, or hard sciences, or a degree in the sciences
Experience with pharmaceutical datasets and nomenclature
Experience with containerization technologies, such as Docker and Kubernetes, is a plus
Experience working with knowledge graphs
Ability to explain complex technical issues to a non-technical audience
Self-directed and able to handle multiple concurrent projects and prioritize tasks independently
Able to make tough decisions when trade-offs are required to deliver results
Strong communication skills required: verbal, written, and interpersonal

Supervisory Responsibilities
No

Benefits
USP provides the benefits to protect yourself and your family today and tomorrow. From company-paid time off and comprehensive healthcare options to retirement savings, you can have peace of mind that your personal and financial well-being is protected.
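Since the role lists orchestration tools such as Apache Airflow alongside AWS Glue, here is a minimal sketch of what that looks like in practice: a two-task Airflow DAG (Airflow 2.4+ syntax). The DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Airflow 2.4+ DAG sketch: extract then load, once a day.
# The dag_id and the work done inside each task are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")        # placeholder for real extract logic

def load():
    print("write transformed rows to the warehouse")  # placeholder for real load logic

with DAG(
    dag_id="example_etl",             # hypothetical
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # run extract before load
```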

Posted 1 day ago

Apply

3.0 years

6 - 8 Lacs

Hyderābād

On-site

Basic qualifications:
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and MATLAB
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling (see the sketch after this listing)

The ShipTech BI team is looking for a smart and ambitious individual to support developing the operational reporting structure in Amazon Logistics. The potential candidate will support analysis, improvement, and creation of metrics and dashboards for Transportation by Amazon. In addition, they will work with internal customers at all levels of the organization – Operations, Customer Service, HR, Technology, Operational Research. The potential candidate will enjoy the challenges and rewards of working in a fast-growing organization. This is a high-visibility position.

As an Amazon Data Business Intelligence Engineer you will be working in one of the world's largest and most complex data warehouse environments. You should have deep expertise in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. You should be an expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers to understand the business requirements and implement reporting solutions. Above all, you should be passionate about bringing large datasets together to answer business questions and drive change.

Key Responsibilities:
- Design automated solutions for recurrent reporting (daily/weekly/monthly).
- Design automated processes for in-depth analysis databases.
- Design automated data control processes.
- Collaborate with the software development team to build the designed solutions.
- Learn, publish, analyze, and improve management information dashboards, operational business metrics decks, and key performance indicators.
- Improve tools and processes, scale existing solutions, and create new solutions as required based on stakeholder needs.
- Provide in-depth analysis to management with the support of accounting, finance, transportation, and supply chain teams.
- Participate in annual budgeting and forecasting efforts.
- Perform monthly variance analysis and identify risks & opportunities.

Preferred qualifications:
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
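The core workflow named in the qualifications (SQL against the warehouse, then Python to shape the result) can be sketched in a few lines. The cluster address, credentials, and shipments table below are hypothetical; Redshift speaks the PostgreSQL wire protocol, so psycopg2 works as the driver.

```python
# Minimal sketch: pull a week of shipment counts from Redshift and pivot them.
# Host, credentials, schema, and columns are hypothetical.
import pandas as pd
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",  # hypothetical
    dbname="analytics",
    user="report_user",
    password="***",
    port=5439,
)

query = """
    SELECT ship_date, region, COUNT(*) AS shipments
    FROM logistics.shipments              -- hypothetical table
    WHERE ship_date >= CURRENT_DATE - 7
    GROUP BY 1, 2
"""
df = pd.read_sql(query, conn)
weekly = df.pivot_table(index="ship_date", columns="region", values="shipments")
print(weekly.head())
```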

Posted 1 day ago

Apply

10.0 years

8 - 10 Lacs

Hyderābād

On-site

Full-time
Employee Status: Regular
Role Type: Hybrid
Department: Information Technology & Systems
Schedule: Full Time

Company Description
Experian is a global data and technology company, powering opportunities for people and businesses around the world. We help to redefine lending practices, uncover and prevent fraud, simplify healthcare, create marketing solutions, and gain deeper insights into the automotive market, all using our unique combination of data, analytics and software. We also assist millions of people to realize their financial goals and help them save time and money. We operate across a range of markets, from financial services to healthcare, automotive, agribusiness, insurance, and many more industry segments. We invest in people and new advanced technologies to unlock the power of data. As a FTSE 100 Index company listed on the London Stock Exchange (EXPN), we have a team of 22,500 people across 32 countries. Our corporate headquarters are in Dublin, Ireland. Learn more at experianplc.com.

Job Description
Responsibilities:

Team Leadership & Delivery:
Manage, coach, and grow engineering teams responsible for backend services, data streaming, and API integrations within our fintech platform.
Ensure successful delivery of end-to-end scalable, reliable, and secure systems built with Java and AWS cloud services, along with frontend technologies (mobile native + web).
Collaborate with engineers to review architecture, set technical direction, and evolve infrastructure based on business priorities.
Provide oversight and support to engineers building microservices, reactive systems, and real-time data pipelines.
Ensure technical deliverables align with non-functional requirements such as performance, compliance, privacy, and operational SLAs.

Culture & Quality:
Cultivate an environment of engineering ownership, psychological safety, and continuous improvement.
Drive adoption of modern software engineering practices including test automation, CI/CD, API-first design, and secure coding.
Lead by example in performing code reviews, advocating for clean architecture, and encouraging TDD and documentation discipline.
Foster strong collaboration with Product, Design, Security, and DevOps to deliver robust, user-centric financial applications.

Talent Development & Technical Guidance:
Guide engineers in mastering tools like Gradle, JDK compatibility, and dependency optimization.
Mentor team members on best practices in cloud-native development, reactive programming, and scalable system design.
Identify and grow future leaders while maintaining a high bar for performance and inclusivity.
Lead team hiring, onboarding, and career development efforts tailored to emerging engineering talent in the region.

Operational Excellence:
Monitor engineering metrics (velocity, quality, uptime, defect rate) and ensure adherence to agile workflows.
Partner with engineering leadership to drive platform strategy and improve the developer experience.
Stay current on regulatory and compliance considerations including GDPR, PCI, ISO 27001, and others relevant to fintech.

Qualifications
10+ years of software development experience, with 3+ years managing engineering teams in fast-paced, agile environments.
Strong technical background in Java, Spring Boot, AWS, and distributed systems architecture.
Hands-on understanding of technologies like GraphQL, Kafka, DynamoDB, Lambda, Kinesis, and microservices (a small Kafka-to-DynamoDB sketch follows this listing).
Proven track record of shipping high-impact products in regulated or high-availability domains such as fintech or e-commerce.
Ability to lead teams working across real-time data systems, cloud infrastructure, and API ecosystems.
Strong communication and interpersonal skills with a collaborative leadership style.

Preferred Experience:
Built and scaled engineering teams supporting financial services or other highly regulated platforms.
Familiarity with mass-market consumer apps and retail-scale backend systems.
Deep knowledge of compliance standards (PCI, HIPAA, CCPA, etc.) and secure development practices.
Experience working with distributed teams across time zones and geographies.

Additional Information
Our uniqueness is that we celebrate yours. Experian's culture and people are important differentiators. We take our people agenda very seriously and focus on what matters: DEI, work/life balance, development, authenticity, collaboration, wellness, reward & recognition, volunteering... the list goes on. Experian's people-first approach is award-winning: World's Best Workplaces™ 2024 (Fortune Top 25), Great Place To Work™ in 24 countries, and Glassdoor Best Places to Work 2024, to name a few. Check out Experian Life on social or our Careers Site to understand why.

Experian is proud to be an Equal Opportunity and Affirmative Action employer. Innovation is an important part of Experian's DNA and practices, and our diverse workforce drives our success. Everyone can succeed at Experian and bring their whole self to work, irrespective of their gender, ethnicity, religion, colour, sexuality, physical ability or age. If you have a disability or special need that requires accommodation, please let us know at the earliest opportunity.

Experian Careers - Creating a better tomorrow together
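As promised above, a minimal sketch of the event-driven piece of this stack: a Kafka consumer (kafka-python) projecting account events into DynamoDB. The topic, consumer group, broker address, table, and event fields are all hypothetical.

```python
# Minimal sketch: consume JSON events from Kafka and upsert them into DynamoDB.
# Topic, group, broker, table, and event fields are hypothetical.
import json

import boto3
from kafka import KafkaConsumer  # pip install kafka-python

table = boto3.resource("dynamodb").Table("account-events")  # hypothetical table

consumer = KafkaConsumer(
    "account.updates",                       # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="account-projector",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Store the raw event as a JSON string to sidestep DynamoDB's float restrictions.
    table.put_item(Item={"pk": event["account_id"], "payload": json.dumps(event)})
```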

Posted 1 day ago

Apply

5.0 years

4 - 8 Lacs

Hyderābād

On-site

Job Information
Date Opened: 07/31/2025
Job Type: Full time
Work Experience: 5+ years
Industry: Technology
City: Hyderabad
State/Province: Telangana
Country: India
Zip/Postal Code: 500032

Job Description

About Kanerika
Who we are: Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI. We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.

Awards and Recognitions
Kanerika has won several awards over the years, including:
CMMI Level 3 Appraised in 2024.
Best Place to Work 2022 & 2023 by Great Place to Work®.
Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today.
NASSCOM Emerge 50 Award in 2014.
Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture.
Recognized for ISO 27701, ISO 27001, SOC 2, and GDPR compliance.
Featured as a Top Data Analytics Services Provider by GoodFirms.

Working for us
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people's experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees. Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you'll get while working for Kanerika.

Locations
We are located in Austin (USA), Singapore, Hyderabad, Indore and Ahmedabad (India).
Job Location: Hyderabad, Indore and Ahmedabad (India)

Requirements

Key Responsibilities:
Lead the design and development of AI-driven applications, particularly focusing on RAG-based chatbot solutions (a minimal RAG sketch follows this listing).
Architect robust solutions leveraging Python and Java to ensure scalability, reliability, and maintainability.
Deploy, manage, and scale AI applications using AWS cloud infrastructure, optimizing performance and resource utilization.
Collaborate closely with cross-functional teams to understand requirements, define project scopes, and deliver solutions effectively.
Mentor team members, providing guidance on best practices in software development, AI methodologies, and cloud deployments.
Ensure solutions meet quality standards, including thorough testing, debugging, performance tuning, and documentation.
Continuously research emerging AI technologies and methodologies to incorporate best practices and innovation into our products.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, Mathematics, Statistics, or related fields.
At least 5 years of professional experience in AI/machine learning engineering.
Strong programming skills in Python and Java.
Demonstrated hands-on experience building Retrieval-Augmented Generation (RAG)-based chatbots or similar generative AI applications.
Proficiency in cloud platforms, particularly AWS, including experience with EC2, Lambda, SageMaker, DynamoDB, CloudWatch, and API Gateway.
Solid understanding of AI methodologies, including natural language processing (NLP), vector databases, embedding models, and large language model integrations.
Experience with leading projects or teams, managing technical deliverables, and ensuring high-quality outcomes.
AWS certifications (e.g., AWS Solutions Architect, AWS Machine Learning Specialty).
Familiarity with popular AI/ML frameworks and libraries such as Hugging Face Transformers, TensorFlow, PyTorch, LangChain, or similar.
Experience in Agile development methodologies.
Excellent communication skills, capable of conveying complex technical concepts clearly and effectively.
Strong analytical and problem-solving capabilities, with the ability to navigate ambiguous technical challenges.

Benefits

Employee Benefits
1. Culture:
a. Open Door Policy: Encourages open communication and accessibility to management.
b. Open Office Floor Plan: Fosters a collaborative and interactive work environment.
c. Flexible Working Hours: Allows employees to have flexibility in their work schedules.
d. Employee Referral Bonus: Rewards employees for referring qualified candidates.
e. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
a. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
b. Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
a. GMC and Term Insurance: Offers medical coverage and financial protection.
b. Health Insurance: Provides coverage for medical expenses.
c. Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
a. Company-sponsored family events: Creates opportunities for employees and their families to bond.
b. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
c. Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
a. Company-sponsored outings: Organizes recreational activities for employees.
b. Gratuity: Provides a monetary benefit as a token of appreciation.
c. Provident Fund: Helps employees save for retirement.
d. Generous PTO: Offers more than the industry standard for paid time off.
e. Paid sick days: Allows employees to take paid time off when they are unwell.
f. Paid holidays: Gives employees paid time off for designated holidays.
g. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
a. L&D with FLEX – Enterprise Learning Repository: Provides access to a learning repository for professional development.
b. Mentorship Program: Offers guidance and support from experienced professionals.
c. Job Training: Provides training to enhance job-related skills.
d. Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
e. Promote from Within: Encourages internal growth and advancement opportunities.
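As referenced in the responsibilities, here is a deliberately tiny sketch of the RAG loop: embed a query, retrieve the nearest documents by cosine similarity, and assemble a grounded prompt. The embed() function is a random stand-in for a real embedding model, the corpus is hypothetical, and the final prompt would go to an LLM (for example via Amazon Bedrock) rather than being printed.

```python
# Minimal RAG sketch: retrieve top-k documents for a query, build a prompt.
# embed() is a stand-in; the corpus and prompt template are hypothetical.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real embedding model (e.g., one hosted on SageMaker)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

corpus = ["Refund policy: ...", "Shipping times: ...", "Warranty terms: ..."]
doc_vectors = np.stack([embed(d) for d in corpus])

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embed(query)
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    top = np.argsort(scores)[::-1][:k]   # indices of the k most similar documents
    return [corpus[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long does shipping take?"))
```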

Posted 1 day ago

Apply

7.0 years

7 - 8 Lacs

Cochin

On-site

Job Information
We are seeking a highly skilled and motivated .NET Technical Lead to guide our engineering team in the development of scalable, secure, and high-performance web applications. You will play a pivotal role in architecture, design, development, and deployment using modern .NET technologies, while also mentoring developers and ensuring adherence to best practices.

Key Responsibilities
Lead the design, development, and deployment of ASP.NET Core Web APIs and enterprise-grade web applications.
Architect scalable backend systems using .NET 6/7/8, C#, and Entity Framework Core or ADO.NET.
Collaborate with cross-functional teams including front-end, DevOps, QA, and business stakeholders.
Guide the team in implementing RESTful services, microservices, and secure integrations.
Lead code reviews, enforce coding standards, and drive performance optimization.
Own DevOps pipelines, CI/CD automation, and deployment strategies.
Manage cloud infrastructure and services on Azure or AWS (App Services, SQL, Blob Storage, Lambda, etc.).
Oversee database design and optimization in MS SQL Server and NoSQL databases (e.g., MongoDB, Cosmos DB).
Mentor junior and mid-level developers, support skill development, and conduct technical training.
Work in Agile/Scrum methodology, actively participating in sprint planning, stand-ups, and retrospectives.

Skills You Need
Expertise in C#, ASP.NET MVC, ASP.NET Core, and Web API development.
Strong understanding of Entity Framework Core or ADO.NET for data access.
Solid experience with MS SQL Server, stored procedures, and database performance tuning.
Experience working with NoSQL databases (MongoDB, Cosmos DB, DynamoDB preferred).
Hands-on experience with Azure or AWS cloud platforms.
Experience in DevOps practices including CI/CD, Git, Azure DevOps, GitHub Actions, or Jenkins.
Strong understanding of software design patterns, SOLID principles, and architectural best practices.
Exposure to containerization (Docker, Kubernetes) is a plus.
Excellent communication, leadership, and project coordination skills.

Experience: 7 years
Work Location: Kochi
Work Type: Full Time
Please send your resume to careers@cabotsolutions.com

Posted 1 day ago

Apply

4.0 years

5 - 17 Lacs

Coimbatore

On-site

Job Profile: Full Stack Engineer
Experience: 4+ Years
Location: Coimbatore

Overview: As a Full Stack Engineer at our organisation, you will play a key role in developing and deploying scalable, high-performance applications. You'll work in a predominantly AWS-focused environment, ensuring top-notch code quality and seamless deployment from ideation to production. Your expertise in React, React Native, Python, and AWS will drive business growth through user-centric innovation and robust system architecture.

Technical Proficiencies:
4-6 years of experience in developing and deploying large-scale software solutions.
Front-End: Strong expertise in React + Redux, JavaScript/TypeScript, React Native (3+ years).
Back-End: Proficiency in Python for server-side logic and API development (3+ years).
Cloud Technologies: Hands-on experience with AWS (Lambda, DynamoDB, serverless architecture) (2+ years) (a DynamoDB query sketch follows this listing).
Database Management: Solid understanding of NoSQL databases (DynamoDB, MongoDB, etc.).
Testing & Debugging: Strong experience in unit testing, integration testing, and performance optimization.
RESTful APIs: Proven experience in designing, developing, and integrating APIs for scalable applications.

Key Responsibilities:
End-to-End Development: Design, implement, and deploy mobile and web applications with a focus on performance and scalability.
Code Quality & Compliance: Write clean, efficient, and maintainable code, ensuring healthcare industry compliance where applicable.
Collaboration & Coordination: Work closely with domain experts, designers, engineers, AWS specialists, and QA teams to develop seamless user experiences.
Tech Stack: Utilize React, React Native, TypeScript, Python, DynamoDB, and other modern technologies to build robust applications.
Cloud & API Development: Design, integrate, and maintain RESTful APIs while leveraging AWS serverless services (Lambda, DynamoDB) for scalable solutions.
Cross-Time-Zone Communication: Effectively coordinate with offshore teams via Slack, email, and other collaboration tools.

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹1,700,000.00 per year
Benefits: Health insurance, Provident Fund
Experience:
Full-stack development: 4 years (Preferred)
Python: 3 years (Preferred)
React Native: 2 years (Preferred)
JavaScript: 2 years (Preferred)
TypeScript: 2 years (Preferred)
AWS: 2 years (Preferred)
Location: Coimbatore, Tamil Nadu (Required)
Work Location: In person
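A minimal sketch of the data-access side of this stack: querying recent activity for one user from DynamoDB with boto3's condition expressions. The table name, key schema, and values are hypothetical.

```python
# Minimal sketch: fetch the 20 newest items for one user from DynamoDB.
# Table name, key names, and values are hypothetical.
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table("user-activity")  # hypothetical table

resp = table.query(
    KeyConditionExpression=(
        Key("user_id").eq("u-123") & Key("created_at").gt("2025-01-01")
    ),
    ScanIndexForward=False,  # sort descending on the range key: newest first
    Limit=20,
)
for item in resp["Items"]:
    print(item)
```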

Posted 1 day ago

Apply

4.0 - 7.0 years

10 - 16 Lacs

Ahmedabad

On-site

MERN Lead
Location: Ahmedabad
Experience: 4 to 7 years

At Brilworks, we are passionate about delivering innovative software solutions. We are looking for an experienced Lead MERN Stack Developer who not only has strong technical expertise but also excels in managing and mentoring a team of developers. If you are driven by challenges and enjoy fostering a collaborative and growth-oriented team environment, we want to hear from you!

Team Leadership & Management:
Lead and mentor a team of 5 to 7 developers, ensuring they meet project objectives and personal growth goals.
Provide technical guidance, conduct code reviews, and ensure adherence to best practices.
Facilitate effective communication within the team and with stakeholders, including clients.
Oversee sprint planning, task delegation, and tracking progress in an Agile environment.
Promote a culture of accountability, ownership, and continuous learning within the team.

Project Delivery:
Translate client requirements and Jira tickets into actionable tasks and deliverables.
Collaborate with clients and stakeholders to provide updates, clarify requirements, and ensure alignment with goals.
Ensure the team delivers high-quality, scalable, and maintainable code.

Technical Responsibilities:
Architect and develop robust and scalable applications using the MERN stack (MongoDB, Express, React, Node.js).
Ensure responsive and pixel-perfect UI implementation from Figma designs.
Manage state effectively in React applications (e.g., Redux, React Query).
Build and maintain RESTful APIs and optionally work with GraphQL APIs.
Implement and enforce automated testing practices, including unit testing and end-to-end testing (e.g., Cypress).
Establish CI/CD pipelines for efficient deployment and testing processes.
Optimize applications for performance, scalability, and security.

Must-Have:
At least 4 years of hands-on experience in MERN stack development.
Proficiency in React.js, including state management and component design.
Strong knowledge of Node.js and Express.js for backend development.
Experience with REST API development and integration.
Ability to convert Figma designs into responsive React components.
Expertise in writing unit tests to ensure code quality.
Solid understanding of Agile development methodologies.

Good-to-Have:
Experience with GraphQL and building GraphQL APIs.
Knowledge of Next.js and Nest.js frameworks.
Familiarity with AWS services like S3, Cognito, DynamoDB, etc.
Experience with CI/CD pipeline setup and management.
Knowledge of Storybook for UI development and testing.
Proficiency in Cypress for end-to-end testing.

Soft Skills:
Strong leadership and team management abilities.
Excellent problem-solving skills and the ability to make critical decisions under pressure.
Clear and concise communication, both written and verbal.
Ability to manage multiple priorities and meet deadlines in a fast-paced environment.

Job Types: Full-time, Permanent
Pay: ₹1,000,000.00 - ₹1,600,000.00 per year
Benefits: Flexible schedule, Health insurance, Paid sick time, Paid time off, Provident Fund
Experience: MERN: 4 years (Required)
Location: Ahmedabad, Gujarat (Preferred)
Work Location: In person

Posted 1 day ago

Apply

3.0 years

20 - 29 Lacs

Ahmedabad

Remote

Full Time | Ahmedabad/GIFT City

The Site Reliability Engineer (SRE) position is a software development-oriented role, focusing heavily on coding, automation, and ensuring the stability and reliability of our global platform. The ideal candidate will primarily be a skilled software developer capable of participating in on-call rotations. The SRE team develops sophisticated telemetry and automation tools, proactively monitoring platform health and executing automated corrective actions. As guardians of the production environment, the SRE team leverages advanced telemetry to anticipate and mitigate issues, ensuring continuous platform stability.

What Will You Be Involved With?
Develop and maintain advanced telemetry and automation tools for monitoring and managing global platform health.
Actively participate in on-call rotations, swiftly diagnosing and resolving system issues and escalations from the customer support team (this is not a customer-facing role).
Implement automated solutions for incident response, system optimization, and reliability improvement.
Proactively identify potential system stability risks and implement preventive measures.

What Will You Bring to the Table?
Software Development:
3+ years of professional Python development experience.
Strong grasp of Python object-oriented programming concepts and inheritance.
Experience developing multi-threaded Python applications (a small example follows this listing).
2+ years of experience using Terraform, with proficiency in creating modules and submodules from scratch.
Proficiency in, or willingness to learn, Golang.
Operating Systems:
Experience with Linux operating systems.
Strong understanding of monitoring critical system health parameters.
Cloud:
3+ years of hands-on experience with AWS services including EC2, Lambda, CloudWatch, EKS, ELB, RDS, DynamoDB, and SQS.
AWS Associate-level certification or higher preferred.
Networking:
Basic understanding of network protocols and concepts: TCP/IP, DNS, HTTP, and load balancing.
Additional Qualifications (Preferred):
Familiarity with trading systems and low-latency environments is advantageous but not required.

What We Bring to the Table
Compensation: ₹2,000,000 – ₹2,980,801 / year
We offer a comprehensive benefits package designed to support your well-being, growth, and work-life balance.
Health & Financial Security:
Medical, Dental, and Vision coverage
Group Life (GTL) and Group Income Protection (GIP) schemes
Pension contributions
Time Off & Flexibility:
Enjoy the best of both worlds: the energy and collaboration of in-person work, combined with the convenience and focus of remote days. This is a hybrid position requiring three days of in-office collaboration per week, with the flexibility to work remotely for the remaining two days. Our hybrid model is designed to balance individual flexibility with the benefits of in-person collaboration; it enhances team cohesion, spontaneous innovation, and hands-on mentorship opportunities, and strengthens our company culture.
25 days of Paid Time Off (PTO) per year, with the option to roll over unused days.
One dedicated day per year for volunteering.
Two professional development days per year to allow uninterrupted professional development.
An additional PTO day added during milestone anniversary years.
Robust paid holiday schedule with early dismissal.
Generous parental leave for all parents (including adoptive parents).
Work-Life Support & Resources:
Budget for tech accessories, including monitors, headphones, keyboards, and other office equipment.
Milestone anniversary bonuses.
Wellness & Lifestyle Perks:
Subsidy contributions toward gym memberships and health/wellness initiatives (including discounted healthcare premiums, healthy meal delivery programs, or smoking cessation support).
Our Culture:
Forward-thinking, culture-based organization with collaborative teams that promote diversity and inclusion.

Trading Technologies is a Software-as-a-Service (SaaS) technology platform provider to the global capital markets industry. The company's award-winning TT® platform connects to the world's major international exchanges and liquidity venues in listed derivatives alongside a growing number of asset classes, including fixed income and cryptocurrencies. The TT platform delivers advanced tools for trade execution and order management, market data solutions, analytics, trade surveillance, risk management, and infrastructure services to the world's leading sell-side institutions, buy-side firms, and exchanges. The company's blue-chip client base includes Tier 1 banks as well as brokers, money managers, hedge funds, proprietary traders, Commodity Trading Advisors (CTAs), commercial hedgers, and risk managers. These firms rely on the TT ecosystem to manage their end-to-end trading operations. In addition, exchanges utilize TT's technology to deliver innovative solutions to their market participants. TT also strategically partners with technology companies to make their complementary offerings available to Trading Technologies' global client base through the TT ecosystem.

Trading Technologies (TT) is an equal-opportunity employer. Equal employment has been, and continues to be, a required practice at the Company. Trading Technologies' practice of equal employment opportunity is to recruit, hire, train, promote, and base all employment decisions on ability rather than race, color, religion, national origin, sex/gender orientation, age, disability, sexual orientation, genetic information or any other protected status. Additionally, TT participates in the E-Verify Program for US offices.
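As noted in the requirements, a small example of multi-threaded Python in the telemetry spirit of this role: probe several health endpoints concurrently and report the results. The endpoint URLs and the two-second timeout are hypothetical.

```python
# Minimal sketch: concurrent HTTP health checks with the standard library only.
# The endpoint URLs are hypothetical.
import queue
import threading
import urllib.request

HOSTS = ["https://example.com/health", "https://example.org/health"]  # hypothetical
results = queue.Queue()

def probe(url: str) -> None:
    """Fetch one endpoint and record whether it answered 200 within 2 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            results.put((url, resp.status == 200))
    except OSError:  # URLError (and socket timeouts) subclass OSError
        results.put((url, False))

threads = [threading.Thread(target=probe, args=(u,)) for u in HOSTS]
for t in threads:
    t.start()
for t in threads:
    t.join()

while not results.empty():
    url, healthy = results.get()
    print(f"{url}: {'OK' if healthy else 'UNHEALTHY'}")
```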

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world, by helping teams get data business ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves. With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team. We are now looking for Software Engineers to join #Team Green.

About the Role
Matillion is built around small development teams with responsibility for specific themes and initiatives. Each team is a mix of engineers with various levels of skills and experience. As a Software Engineer you will work within a team to write, test, and release new features and fix problems in the Matillion products, all while innovating on new ideas.

Technologies Matillion uses: Java, React, Spring, GraphQL, Docker, Kubernetes, MongoDB, DynamoDB, Kafka, SQL, RESTful services, Cloud Technologies (AWS, GCP, Azure), Agile

What you will be doing
You'll spend a significant amount of your time working on production services and applications for Matillion, whilst also collaborating with the broader team to understand and deliver work that contributes to the team's goals.
Responsible for your own workflow, you'll be writing code and unit tests all the way through to completion and production release, then ongoing maintenance and support.
Whilst also participating in code reviews, you will be part of research projects, exploring future opportunities and new technologies.
You'll have extensive opportunity to develop your technical and interpersonal skills through self-training, collaboration with the broader team, and mentoring, enabling progression through up-skilling to take on more complex tasks.
By developing an understanding of the team's domain and architecture, you'll help handle risk, change and uncertainty, contributing to confident decision-making and continually improving ways of working.

What we are looking for
Proficiency coding in Java, with a good understanding of the underpinning techniques of object-oriented programming, programming concepts and best practices (e.g. style guidelines, testability, efficiency, observability, scalability, security).
Experience implementing Java Spring microservices, using container technologies such as Docker, and with relational database technologies such as Postgres, MySQL, Oracle or SQL Server.
Background in the full software development life cycle from design to deployment via CI/CD tooling, using agile methodologies (e.g. Kanban, Scrum).
Familiarity with cloud technologies, with a strong preference for AWS.
Ability to collaborate in a cross-functional team to solve business goals, whilst adapting to different types of technical challenges.

Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners.

We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits:
- Company Equity
- 27 days paid time off
- 12 days of Company Holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development

More about Matillion
Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases from insights and operational analytics, to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world. We are passionate about doing things in a smart, considerate way. We're honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado.

We are keen to hear from prospective Matillioners, so even if you don't feel you match all the criteria please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com.

Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Position Summary... What you'll do...

About the Team: The Data and Customer Analytics Team is a strategic unit dedicated to transforming data into actionable insights that drive customer-centric decision-making across the organization. Our mission is to harness the power of data to understand customer behavior, optimize business performance, and enable personalized experiences. Our team is responsible for building and maintaining a centralized, scalable, and secure data platform that consolidates customer-related data from diverse sources across the organization. This team plays a foundational role in enabling data-driven decision-making, advanced analytics, and personalized customer experiences. It also plays a critical role in building trust with customers by implementing robust privacy practices, policies, and technologies that protect personal information throughout its lifecycle.

What You'll Do
Design, build, test and deploy cutting-edge solutions at scale, impacting a multi-billion-dollar business.
Work closely with the product owner and technical lead and play a major role in the overall delivery of the assigned project/enhancements.
Interact with Walmart engineering teams across geographies to leverage expertise and contribute to the tech community.
Provide business insights while leveraging internal tools and systems, databases and industry data.
Drive the success of the implementation by applying technical skills to design and build enhanced processes and technical solutions in support of strategic initiatives.

What You'll Bring
6-9 years' experience in building highly scalable, high-performance, responsive web applications.
Experience building customizable, reusable, and dynamic API components using Java, NodeJS, serverless APIs, RESTful APIs and GraphQL (a small GraphQL call is sketched after this listing).
Experience with Java Spring Boot API deployment for server-side development with design principles.
Understanding of RESTful APIs & GraphQL.
Experience working with NoSQL databases like Cassandra, MongoDB, etc.
Strong work experience with Google Cloud Platform services.
Strong creative, collaboration, and communication skills.
Ability to multitask between several different requirements and features concurrently.
Familiarity with CI/CD, unit testing, and automated frontend testing.
Build high-quality code by conducting unit testing and enhancing design to prevent re-occurrences of defects.
Ability to perform in a team environment.
Strong expertise in Java, Spring Boot, Spring MVC, and Spring Cloud.
Hands-on experience with Apache Kafka (topics, partitions, consumer groups, Kafka Streams).
Solid understanding of microservices architecture and event-driven systems.
Experience with RESTful APIs, OAuth, JWT, and API gateways.
Proficiency in SQL (PostgreSQL, MySQL, BigQuery, BigLake GCP services) and NoSQL (MongoDB, Cassandra, DynamoDB).
Knowledge of Docker, Kubernetes, and cloud platforms (Azure, AWS, or GCP).
Strong debugging and performance optimization skills.

About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow. We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions and reimagine the future of retail.

Flexible, hybrid work
We use a hybrid way of working with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers and the communities we serve live better when we really know them. That means understanding, respecting and valuing unique styles, experiences, identities, ideas and opinions – while being inclusive of all people.

Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in computer science, information technology, engineering, information systems, cybersecurity, or related area and 3 years' experience in software engineering or related area at a technology, retail, or data-driven company.
Option 2: 5 years' experience in software engineering or related area at a technology, retail, or data-driven company.

Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.
Certification in Security+, GISF, CISSP, CCSP, or GSEC; Master's degree in computer science, information technology, engineering, information systems, cybersecurity, or related area and 1 year's experience leading information security or cybersecurity projects.
Information Technology - CISCO Certification - Certification

Primary Location...
BLOCK-1, PRESTIGE TECH PACIFIC PARK, SY NO. 38/1, OUTER RING ROAD KADUBEESANAHALLI, India
R-2221423
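As flagged in the qualifications, a minimal sketch of calling a GraphQL API over HTTP. The endpoint and the customer/orders schema are hypothetical; any GraphQL server accepting POSTed JSON behaves the same way.

```python
# Minimal sketch: a GraphQL query with variables via plain HTTP POST.
# Endpoint and schema (customer, orders) are hypothetical.
import requests

query = """
query GetCustomer($id: ID!) {
  customer(id: $id) {
    name
    orders { id total }
  }
}
"""

resp = requests.post(
    "https://api.example.com/graphql",   # hypothetical endpoint
    json={"query": query, "variables": {"id": "c-123"}},
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["data"]["customer"])
```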

Posted 1 day ago

Apply

4.5 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Full Stack Engineer (Java + Angular)
Experience: 4.5+ Years
Location: Kochi
Salary: Up to 15 LPA
Employment Type: Full-time

Key Responsibilities
Design, develop, test, and maintain single-page web applications (SPAs) using Angular, TypeScript, Java, and modern web technologies.
Collaborate with cross-functional teams (developers, QA, product owners, designers) to build cutting-edge solutions.
Frontend: Angular, TypeScript, Sass, HTML, Node/npm
Backend: Java, Spring MVC, REST APIs
Databases: PostgreSQL, MongoDB
Cloud/DevOps: AWS (Lambda, Aurora), Apache Tomcat
Ensure high-quality code through unit testing (JUnit, Jasmine) and best practices.
Debug, optimize, and troubleshoot applications to maintain "category killer" status.
Configure and manage development environments (IDEs, build tools, CI/CD pipelines).

Skills & Qualifications
✅ Must-Have:
Bachelor's degree in Computer Science/Software Engineering (or a related field) with a minimum 3.0 GPA.
4.5+ years of hands-on experience in Java and Angular development.
Strong expertise in JavaScript/TypeScript, HTML, CSS/Sass, REST APIs, and SQL/NoSQL databases.
Experience with multi-layered software architectures and Agile methodologies.
Knowledge of build tools (Maven, npm) and version control (Bitbucket).
Problem-solving mindset with a dedication to clean, efficient code.
✅ Nice-to-Have (Advantageous Skills):
Experience with AWS (Lambda, DynamoDB, Aurora).
Familiarity with Jira, Concourse CI/CD, or testing frameworks (Jasmine, JUnit).
Knowledge of NoSQL databases (MongoDB, Cassandra).

Posted 1 day ago

Apply

12.0 years

0 Lacs

India

On-site

We are seeking a highly skilled and experienced AWS Architect with a strong background in Data Engineering and expertise in Generative AI. In this pivotal role, you will be responsible for designing, building, and optimizing scalable, secure, and cost-effective data solutions that leverage the power of AWS services, with a particular focus on integrating and managing Generative AI capabilities. The ideal candidate will possess a deep understanding of data architecture principles, big data technologies, and the latest advancements in Generative AI, including Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). You will work closely with data scientists, machine learning engineers, and business stakeholders to translate complex requirements into robust and innovative solutions on the AWS platform.

Responsibilities:
• Architect and Design: Lead the design and architecture of end-to-end data platforms and pipelines on AWS, incorporating best practices for scalability, reliability, security, and cost optimization.
• Generative AI Integration: Architect and implement Generative AI solutions using AWS services like Amazon Bedrock, Amazon SageMaker, Amazon Q, and other relevant technologies. This includes designing RAG architectures, prompt engineering strategies, and fine-tuning models with proprietary data (knowledge base). (A minimal Bedrock invocation is sketched after this listing.)
• Data Engineering Expertise: Design, build, and optimize ETL/ELT processes for large-scale data ingestion, transformation, and storage using AWS services such as AWS Glue, Amazon S3, Amazon Redshift, Amazon Athena, Amazon EKS and Amazon EMR.
• Data Analytics: Design, build, and optimize analytical solutions for large-scale data ingestion, analytics and insights using AWS services such as Amazon QuickSight.
• Data Governance and Security: Implement robust data governance, data quality, and security measures, ensuring compliance with relevant regulations and industry best practices for both traditional data and Generative AI applications.
• Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and Generative AI workloads, ensuring efficient resource utilization and optimal response times.
• Technical Leadership: Act as a subject matter expert and provide technical guidance to data engineers, data scientists, and other team members. Mentor and educate on AWS data and Generative AI best practices.
• Collaboration: Work closely with cross-functional teams, including product owners, data scientists, and business analysts, to understand requirements and deliver impactful solutions.
• Innovation and Research: Stay up to date with the latest AWS services, data engineering trends, and advancements in Generative AI, evaluating and recommending new technologies to enhance our capabilities.
• Documentation: Create comprehensive technical documentation, including architectural diagrams, design specifications, and operational procedures.
• Cost Management: Monitor and optimize AWS infrastructure costs related to data and Generative AI workloads.

Required Skills and Qualifications:
• 12+ years of experience in data engineering, data warehousing, or big data architecture.
• 5+ years of experience in an AWS Architect role, specifically with a focus on data.
• Proven experience designing and implementing scalable data solutions on AWS.
• Strong hands-on experience with core AWS data services, including:
  o Data Storage: Amazon S3, Amazon Redshift, Amazon DynamoDB, Amazon RDS
  o Data Processing: AWS Glue, Amazon EMR, Amazon EKS, AWS Lambda, Informatica
  o Data Analytics: Amazon QuickSight, Amazon Athena, Tableau
  o Data Streaming: Amazon Kinesis, Amazon MSK
  o Data Lake: AWS Lake Formation
• Strong competencies in Generative AI, including:
  o Experience with Large Language Models (LLMs) and Foundation Models (FMs).
  o Hands-on experience with Amazon Bedrock (including model customization, agents, and orchestrations).
  o Understanding and experience with Retrieval Augmented Generation (RAG) architectures and vector databases (e.g., Amazon OpenSearch Service for vector indexing).
  o Experience with prompt engineering and optimizing model responses.
  o Familiarity with Amazon SageMaker for building, training, and deploying custom ML/Generative AI models.
  o Knowledge of Amazon Q for business-specific Generative AI applications.
• Proficiency in programming languages such as Python (essential), SQL, and potentially Scala or Java.
• Experience with MLOps/GenAIOps principles and tools for deploying and managing Generative AI models in production.
• Solid understanding of data modeling, data warehousing concepts, and data lake architectures.
• Experience with CI/CD pipelines and DevOps practices on AWS.
• Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
• Strong problem-solving and analytical abilities.

Preferred Qualifications:
• AWS Certified Solutions Architect – Professional or AWS Certified Data Engineer – Associate/Specialty.
• Experience with other Generative AI frameworks (e.g., LangChain) or open-source LLMs.
• Familiarity with containerization technologies like Docker and Kubernetes (Amazon EKS).
• Experience with data transformation tools like Informatica and Matillion.
• Experience with data visualization tools (e.g., Amazon QuickSight, Tableau, Power BI).
• Knowledge of data governance tools like Amazon DataZone.
• Experience in a highly regulated industry (e.g., Financial Services, Healthcare).
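As referenced under Generative AI Integration, a minimal sketch of invoking a foundation model through the Amazon Bedrock runtime's Converse API via boto3. The region and model id are illustrative; in a RAG design, the retrieved context would be folded into the user message before the call.

```python
# Minimal sketch: one-shot call to a Bedrock-hosted model via the Converse API.
# Region and modelId are illustrative; access to the model must be enabled.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model id
    messages=[
        {"role": "user", "content": [{"text": "Summarize RAG in one sentence."}]}
    ],
)
print(resp["output"]["message"]["content"][0]["text"])
```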

Posted 1 day ago

Apply

0 years

0 Lacs

Mysore, Karnataka, India

On-site

Company Description
At VDID, we focus on building lasting partnerships through our innovative IT outsourcing model. We ensure equitable value sharing, true ownership, and a team genuinely invested in our clients' success. We cater to small and medium-sized enterprises, delivering cost-effective solutions with precision and excellence. Founded on a commitment to fair wealth distribution and value creation, we are deeply invested in the success of our clients.

Role Description
This is a full-time on-site role for a Senior Full-Stack Developer, located in Mysore. The Senior Full-Stack Developer will be responsible for designing, developing, and maintaining web applications. Day-to-day tasks will include frontend and backend development, database management, code reviews, and collaborating with cross-functional teams to deliver high-quality software solutions. The developer will also ensure application performance and adherence to best coding practices.

Qualifications:
Immediate joiners (full-time/contract) with 5-13 years of experience, based out of Mysore or willing to relocate to Mysore, India.
TypeScript & Python: Expert-level proficiency.
React: Strong experience building complex user interfaces.
AWS & Serverless: Deep understanding of serverless architecture, Lambda, API Gateway, and other core AWS services.
Databases: Practical experience with both SQL (PostgreSQL) and NoSQL (MongoDB, DynamoDB) databases.
Familiarity with DevOps practices and tools like Docker, Kubernetes, and CI/CD pipelines.
Excellent problem-solving skills and ability to debug complex issues.
Strong collaboration and communication skills.
Bachelor's degree in Computer Science, Information Technology, or a related field.
Experience working in an Agile development environment is a plus.

Resumes should be sent to resume@vdidllc.com.

Posted 1 day ago

Apply