Jobs
Interviews

2969 Dynamodb Jobs - Page 21

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Us
Scatterlink is a startup building an end-to-end inventory intelligence platform for mining and heavy industries, combining RFID hardware, mobile apps, backend systems, and ERP integrations. We are a small but ambitious team delivering real-time visibility across industrial operations. From tracking parts on the surface and underground to syncing with enterprise systems, we're rethinking how inventory and assets should be managed.

Job Description
We're looking for a Senior Backend Engineer to help us build the backbone of our product. This role focuses on developing our AWS backend and creating a middleware layer that integrates our platform with ERP systems such as SAP, Oracle JDE, and Microsoft Dynamics 365 F&O.

Responsibilities
- Set up and maintain backend systems using AWS (Lambda, DynamoDB, API Gateway, etc.)
- Design and implement middleware for ERP integrations (SAP, Oracle JDE, D365 F&O, etc.)
- Develop secure, scalable APIs and event-driven services
- Handle bi-directional data sync between our platform and client ERP systems
- Collaborate with frontend, mobile, and analytics teams for smooth data flow
- Write clean, maintainable code and maintain strong documentation
- Troubleshoot, debug, and optimize performance and reliability

Requirements
- 5-10 years of backend engineering experience
- Proficiency in TypeScript and JavaScript (Node.js, Express.js)
- Strong experience with AWS services (serverless preferred)
- Prior experience integrating with ERP systems (at least one of SAP, Oracle JDE, or D365 F&O)
- Solid understanding of APIs (REST/GraphQL), webhooks, OAuth2, and data sync patterns
- Excellent problem-solving skills and attention to detail
- Comfortable working in a fast-paced startup environment
- Must be based in Mumbai or willing to relocate; this is an on-site role

Bonus Skills (Nice to Have)
- Experience with RFID/barcode systems
- Familiarity with mining, manufacturing, or logistics environments
- Exposure to offline-first mobile architecture

What to Expect in the First 90 Days
- Days 0-30: Understand our stack and ERP workflows; participate in design sessions
- Days 30-60: Build initial middleware to fetch POs and master data from ERP systems
- Days 60-90: Finalize bidirectional sync flows and take ownership of the ERP integration layer

Benefits
- Competitive salary
- High ownership and visibility
- Be part of a core team building a new platform from the ground up
- Opportunity to work across hardware, software, and analytics in one company

Job Type: Full-time
Schedule: Day shift, Monday to Friday (alternate Saturdays)
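The bi-directional ERP sync mentioned in the posting above can be sketched minimally as a last-writer-wins merge on per-record timestamps. This is an illustrative assumption, not Scatterlink's actual design; the `Record` shape and field names are hypothetical, and real integrations add conflict auditing, retries, and field-level mapping.

```python
from dataclasses import dataclass

@dataclass
class Record:
    key: str
    value: str
    updated_at: int  # Unix timestamp or monotonically increasing version

def sync(platform: dict, erp: dict) -> None:
    """Last-writer-wins merge: after the call, both stores hold, for every
    key, the record with the newest updated_at seen on either side."""
    for key in set(platform) | set(erp):
        a, b = platform.get(key), erp.get(key)
        if a is None or (b is not None and b.updated_at > a.updated_at):
            platform[key] = b   # ERP side is newer (or only) copy
        elif b is None or a.updated_at > b.updated_at:
            erp[key] = a        # platform side is newer (or only) copy
```

Equal timestamps leave both sides untouched, which keeps the merge idempotent: running `sync` twice is the same as running it once.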

Posted 2 weeks ago

Apply

13.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Introduction
A career in IBM Software means you'll be part of a team that transforms our customers' challenges into solutions. Seeking new possibilities and always staying curious, we are a team dedicated to creating the world's leading AI-powered, cloud-native software solutions for our customers. Our renowned legacy creates endless global opportunities for our IBMers, so the door is always open for those who want to grow their careers. IBM's product and technology landscape includes Research, Software, and Infrastructure. Entering this domain positions you at the heart of IBM, where growth and innovation thrive.

Your Role and Responsibilities
As a Software Development Manager, you'll manage software development, enhance product experiences, and scale our team's capabilities. You'll manage careers, streamline hiring, collaborate with product, and drive innovation. We seek proactive professionals passionate about team growth, software architecture, coding, and process enhancements. Mastery of frameworks, deployment tech, and cloud APIs is essential, as is adaptability to innovative technologies.

Your Primary Responsibilities Include
- Solution Development: Lead the development of innovative solutions to enhance our product and development experience, effectively contributing to making our software better.
- Team Growth and Management: Manage the career growth of team members, scale hiring and development processes, and foster a culture of continuous improvement within the team.
- Strategic Partnership: Partner with product teams to brainstorm ideas and collaborate on delivering an exceptional product, contributing to the overall success of the organization.
- Technical Direction: Provide technical guidance by actively participating in architectural discussions, developing code, and advocating for new process improvements to drive innovation and efficiency.
Preferred Education
Bachelor's Degree

Required Technical and Professional Expertise
- Minimum 13+ years of experience in performance engineering, product development, and managing teams.
- Expertise in public and private cloud infrastructure (AWS, Azure, OpenStack, OCI).
- Experience with cloud orchestration platforms (Kubernetes, Docker, OpenShift, Tanzu).
- Deep understanding of IaC tools such as Terraform and Ansible.
- Exposure to observability frameworks/tools (ELK Stack, Grafana, Prometheus, Splunk).
- Experience with performance testing tools such as JMeter, LoadRunner, Gatling, k6, or similar.
- Expertise in monitoring and analysis tools such as Grafana, Prometheus, AppDynamics, Dynatrace, New Relic, Splunk, or Datadog.
- Strong knowledge of distributed systems, cloud reference architectures, and various deployment patterns (network, compute, storage).
- Knowledge of Kafka, Redis, Elasticsearch, and other high-performance technologies.
- Hands-on experience with core cloud services, microservices, containerization, and multi-region cloud solutions.
- Proficiency in DevOps tools (Jenkins, GitHub, Ansible, Puppet, Chef).
- Knowledge of NoSQL databases (Cassandra, MongoDB, Amazon DynamoDB, CouchDB, Redis).
- Experience with cloud messaging/streaming brokers such as Kafka and RabbitMQ.
- Strong Linux system administration skills and the ability to analyze workloads, debug, and tune performance at the system level.

Preferred Technical and Professional Experience
- Develop and implement strategies to optimize system performance, ensuring scalability and reliability.
- Work closely with management, product owners, developers, and quality engineers to understand product requirements and business use cases.
- Design and execute performance, scalability, stress, reliability, availability, and longevity simulations for SaaS and on-prem products.
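Performance testing tools like JMeter and k6, listed above, report latency percentiles (p50, p95, p99) rather than averages, because a few slow requests dominate user experience. A minimal nearest-rank percentile computation, with purely illustrative sample data, looks like this:

```python
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest sample such that at least
    p percent of all samples are less than or equal to it."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # 1-based nearest rank
    return ordered[rank - 1]

# Response times in milliseconds from a hypothetical load-test run.
latencies = [12.0, 15.0, 11.0, 240.0, 14.0, 13.0, 16.0, 12.0, 980.0, 15.0]
p50 = percentile(latencies, 50)
p95 = percentile(latencies, 95)
```

Note how the median stays near 14 ms while p95 exposes the 980 ms outlier, which a mean would have averaged away.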

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site

Sr. Software Engineer - AWS+Python+Pyspark
Job Date: Jun 22, 2025
Job Requisition Id: 61668
Location: Bangalore, KA, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we're a cluster of the brightest stars working with cutting-edge technologies.
Our purpose is anchored in a single truth: bringing real positive change in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future. We are looking to hire AWS professionals in the following areas:

AWS Data Engineer
- Primary skillsets: AWS services including Glue, PySpark, SQL, Databricks, Python
- Secondary skillset: any ETL tool, GitHub, DevOps (CI/CD)
- Experience: 3-4 years
- Degree in computer science, engineering, or similar fields

Mandatory Skill Set
- Python, PySpark, SQL, and AWS, with experience designing, developing, testing, and supporting data pipelines and applications
- 3+ years of working experience in data integration and pipeline development
- 3+ years of experience with AWS Cloud on data integration with a mix of Apache Spark, Glue, Kafka, Kinesis, and Lambda in S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems; Databricks and Redshift experience is a major plus
- 3+ years of experience using SQL in the development of data warehouse projects/applications (Oracle & SQL Server)
- Strong real-world experience in Python development, especially PySpark in an AWS Cloud environment
- Strong SQL and NoSQL database skills: MySQL, Postgres, DynamoDB, Elasticsearch
- Workflow management tools such as Airflow
- AWS cloud services: RDS, AWS Lambda, AWS Glue, AWS Athena, EMR (equivalent tools in the GCP stack will also suffice)
- Good to have: Snowflake, Palantir Foundry

At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale.
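The role above pairs SQL analytics (including window functions) with Python-driven pipelines. As a minimal sketch, an in-memory SQLite database can stand in for the warehouse (Redshift, Athena, etc.); the `orders` table and its columns are invented for illustration only:

```python
import sqlite3

# In-memory SQLite stands in for the warehouse; requires SQLite >= 3.25
# (bundled with modern Python) for window-function support.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("south", 10.0), ("south", 30.0), ("north", 5.0), ("north", 20.0)],
)

# A window function ranks each order within its region by amount,
# without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM orders
    ORDER BY region, rnk
""").fetchall()
```

The same `RANK() OVER (PARTITION BY ...)` pattern carries over to Redshift or Spark SQL with only dialect-level changes.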
Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture

Posted 2 weeks ago

Apply

9.0 years

5 - 10 Lacs

Thiruvananthapuram

On-site

9 - 12 Years | 1 Opening | Trivandrum

Role description
Role Proficiency: Leverage expertise in a technology area (e.g. Informatica transformation, Teradata data warehouse, Hadoop, analytics). Responsible for the architecture of small/mid-size projects.

Outcomes:
Implement data extraction and transformation, a data warehouse (ETL, data extracts, data load logic, mapping, workflows, stored procedures), data analysis solutions, data reporting solutions, or cloud data tools in any one of the cloud providers (AWS/Azure/GCP). Understand business workflows and related data flows. Develop designs for data acquisition and data transformation or data modelling; apply business intelligence on data, or design data fetching and dashboards. Design information structure and work- and dataflow navigation. Define backup, recovery, and security specifications. Enforce and maintain naming standards and a data dictionary for data models. Provide or guide the team to perform estimates. Help the team develop proofs of concept (POCs) and solutions relevant to customer problems.
Able to troubleshoot problems while developing POCs. Architect/Big Data specialty certification in AWS/Azure/GCP (general, for example via Coursera or a similar learning platform, or any ML certification).

Measures of Outcomes: Percentage of billable time spent in a year developing and implementing data transformation or data storage; number of best practices documented for any new tool or technology emerging in the market; number of associates trained on the data service practice.

Outputs Expected:
Strategy & Planning: Create or contribute short-term tactical solutions to achieve long-term objectives and an overall data management roadmap. Implement methods and procedures for tracking data quality, completeness, redundancy, and improvement. Ensure that data strategies and architectures meet regulatory compliance requirements. Begin engaging external stakeholders, including standards organizations, regulatory bodies, operators, and scientific research communities, or attend conferences with respect to data in the cloud.
Operational Management: Help architects establish governance, stewardship, and frameworks for managing data across the organization. Provide support in implementing the appropriate tools, software, applications, and systems to support data technology goals. Collaborate with project managers and business teams on all projects involving enterprise data. Analyse data-related issues with systems integration, compatibility, and multi-platform integration.
Project Control and Review: Provide advice to teams facing complex technical issues in the course of project delivery. Define and measure project- and program-specific architectural and technology quality metrics.
Knowledge Management & Capability Development: Publish and maintain a repository of solutions, best practices, standards, and other knowledge articles for data management. Conduct and facilitate knowledge sharing and learning sessions across the team. Gain industry-standard certifications on the technology or area of expertise. Support technical
skill building (including hiring and training) for the team based on inputs from the project manager / RTEs. Mentor new members of the team in technical areas. Gain and cultivate domain expertise to provide the best and most optimized solutions to customers (delivery).
Requirement Gathering and Analysis: Work with customer business owners and other teams to collect, analyze, and understand the requirements, including NFRs (or define NFRs). Analyze gaps and trade-offs based on the current system context and industry practices; clarify the requirements by working with the customer. Define the systems and sub-systems that make up the programs.
People Management: Set goals and manage the performance of team engineers. Provide career guidance to technical specialists and mentor them.
Alliance Management: Identify alliance partners based on an understanding of service offerings and client requirements. In collaboration with the architect, create a compelling business case around the offerings. Conduct beta testing of the offerings and assess their relevance to the program.
Technology Consulting: In collaboration with Architects II and III, analyze the application and technology landscape, processes, and tools to arrive at the architecture options that best fit the client program. Analyze cost vs. benefits of solution options. Support Architects II and III in creating a technology/architecture roadmap for the client. Define the architecture strategy for the program.
Innovation and Thought Leadership: Participate in internal and external forums (seminars, paper presentations, etc.). Understand the client's existing business at the program level and explore new avenues to save cost and bring process efficiency. Identify business opportunities to create reusable components/accelerators and reuse existing components and best practices.
Project Management Support: Assist the PM/Scrum Master/Program Manager to identify technical risks and come up with mitigation strategies.
Stakeholder Management: Monitor the concerns of internal stakeholders such as Product Managers and RTEs, and external stakeholders such as client architects, on architecture aspects. Follow through on commitments to achieve timely resolution of issues. Conduct initiatives to meet client expectations. Work to expand the professional network in the client organization at team and program levels.
New Service Design: Identify potential opportunities for new service offerings based on customer voice / partner inputs. Conduct beta testing / POCs as applicable. Develop collateral and guides for GTM.

Skill Examples: Use data services knowledge to create POCs that meet business requirements; contextualize the solution to the industry under the guidance of architects. Use technology knowledge to create proofs of concept (POCs) and (reusable) assets under the guidance of the specialist. Apply best practices in your own area of work, helping with performance troubleshooting and other complex troubleshooting. Define, decide, and defend the technology choices made; review solutions under guidance. Use knowledge of technology trends to provide inputs on potential areas of opportunity for UST. Use independent knowledge of design patterns, tools, and principles to create high-level designs for the given requirements. Evaluate multiple design options and choose the appropriate options for the best possible trade-offs. Conduct knowledge sessions to enhance the team's design capabilities. Review the low- and high-level designs created by specialists for efficiency (consumption of hardware and memory, memory leaks, etc.). Use knowledge of software development processes, tools, and techniques to identify and assess incremental improvements to the software development process, methodology, and tools. Take technical responsibility for all stages in the software development process. Conduct optimal coding with a clear understanding of memory leakage and its related impact.
Implement global standards and guidelines relevant to programming and development; come up with points of view and new technological ideas. Use knowledge of project management and agile tools and techniques to support, plan, and manage medium-size projects/programs as defined within UST, identifying risks and mitigation strategies. Use knowledge of project metrics to understand their relevance in the project; collect and collate project metrics and share them with the relevant stakeholders. Use knowledge of estimation and resource planning to create estimates and plan resources for specific modules or small projects with detailed requirements or user stories in place. Strong proficiency in understanding data workflows and dataflow. Attention to detail. High analytical capabilities.

Knowledge Examples: Data visualization. Data migration. RDBMSs (relational database management systems) and SQL. Hadoop technologies like MapReduce, Hive, and Pig. Programming languages, especially Python and Java. Operating systems like UNIX and MS Windows. Backup/archival software.

Additional Comments:
AI Architect Role Summary: Hands-on AI Architect with strong expertise in Deep Learning, Generative AI, and real-world AI/ML systems. The role involves leading the architecture, development, and deployment of AI agent-based solutions, supporting initiatives such as intelligent automation, anomaly detection, and GenAI-powered assistants across enterprise operations and engineering. This is a hands-on role ideal for someone who thrives in fast-paced environments, is passionate about AI innovation, and can adapt across multiple opportunities based on business priorities.

Key Responsibilities:
• Design and architect AI-based solutions, including multi-agent GenAI systems using LLMs and RAG pipelines.
• Build POCs, prototypes, and production-grade AI components for operations, support automation, and intelligent assistants.
• Lead end-to-end development of AI agents for use cases such as triage, RCA automation, and predictive analytics.
• Leverage GenAI (LLMs) and time-series models to drive intelligent observability and performance management.
• Work closely with product, engineering, and operations teams to align solutions with domain and customer needs.
• Own the model lifecycle from experimentation to deployment using modern MLOps and LLMOps practices.
• Ensure scalable, secure, and cost-efficient implementation across AWS and Azure cloud environments.

Key Skills & Technology Areas:
• AI/ML Expertise: 8+ years in AI/ML, with hands-on experience in deep learning, model deployment, and GenAI.
• LLMs & Frameworks: GPT-3+, Claude, LLAMA3, LangChain, LangGraph, Transformers (BERT, T5), RAG pipelines, LLMOps.
• Programming: Python (advanced), Keras, PyTorch, Pandas, FastAPI, Celery (for agent orchestration), Redis.
• Modeling & Analytics: time-series forecasting, predictive modeling, synthetic data generation.
• Data & Storage: ChromaDB, Pinecone, FAISS, DynamoDB, PostgreSQL, Azure Synapse, Azure Data Factory.
• Cloud & Tools: AWS (Bedrock, SageMaker, Lambda); Azure (Azure ML, Azure Databricks, Synapse); GCP (Vertex AI, optional).
• Observability Integration: Splunk, ELK Stack, Prometheus.
• DevOps/MLOps: Docker, GitHub Actions, Kubernetes, CI/CD pipelines, model monitoring and versioning.
• Architectural Patterns: microservices, event-driven architecture, multi-agent systems, API-first design.

Other Requirements:
• Proven ability to work independently and collaboratively in agile, innovation-driven teams.
• Strong problem-solving mindset and product-oriented thinking.
• Excellent communication and technical storytelling skills.
• Flexibility to work across multiple opportunities based on business priorities.
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus.
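The RAG pipelines this role mentions have a retrieval step at their core: embed the query, rank stored documents by vector similarity, and pass the top hits to the LLM as context. The sketch below is purely illustrative; the toy bag-of-letters `embed` stands in for a real embedding model (e.g. one served from Bedrock or SageMaker), and a production system would use a vector store like ChromaDB or FAISS instead of a list scan.

```python
import math

def embed(text: str) -> list[float]:
    # Toy letter-frequency embedding; a stand-in for a real model.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Retrieval step of a RAG pipeline: return the k documents most
    similar to the query; these would be stuffed into the LLM prompt."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]
```

Swapping `embed` for a real model and the list for an approximate-nearest-neighbor index is what turns this sketch into a production retriever.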
Skills: Python, Pandas, AI/ML, GenAI

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Thiruvananthapuram

On-site

5 - 7 Years | 2 Openings | Trivandrum

Role description
Role Proficiency: This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes:
Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns, and reusing proven solutions. Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications. Document and communicate milestones/stages for end-to-end delivery. Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality. Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency. Validate results with user representatives, integrating the overall solution seamlessly. Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes. Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools. Influence and improve customer satisfaction through effective data solutions.
Measures of Outcomes: Adherence to engineering processes and standards; adherence to schedule/timelines; adherence to SLAs where applicable; number of defects post-delivery; number of non-compliance issues; reduction of recurrence of known defects; quick turnaround of production bugs; completion of applicable technical/domain certifications; completion of all mandatory training requirements; efficiency improvements in data pipelines (e.g. reduced resource consumption, faster run times); average time to detect, respond to, and resolve pipeline failures or data issues; number of data security incidents or compliance breaches.

Outputs Expected:
Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.
Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
Project Management: Manage the delivery of modules effectively.
Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
Estimation: Create and provide input for effort and size estimation for projects.
Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
Release Management: Execute and monitor the release process to ensure smooth transitions.
Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples: Proficiency in SQL, Python, or other programming languages used for data manipulation. Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF. Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g. AWS Glue, BigQuery). Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications. Experience in performance tuning of data processes. Expertise in designing and optimizing data warehouses for cost efficiency. Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets. Capacity to clearly explain and communicate design and development aspects to customers. Ability to estimate time and resource requirements for developing and debugging features or components.
Knowledge Examples: Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF and ADLF. Proficiency in SQL for analytics, including windowing functions. Understanding of data schemas and models relevant to various business contexts. Familiarity with domain-related data and its implications. Expertise in data warehousing optimization techniques. Knowledge of data security concepts and best practices. Familiarity with design patterns and frameworks in data engineering.

Additional Comments:
Data Engineering Role Summary: Skilled Data Engineer with strong Python programming skills and experience building scalable data pipelines across cloud environments. The candidate should have a good understanding of ML pipelines and basic exposure to GenAI solutioning. This role will support large-scale AI/ML and GenAI initiatives by ensuring high-quality, contextual, and real-time data availability.

Key Responsibilities:
• Design, build, and maintain robust, scalable ETL/ELT data pipelines in AWS/Azure environments.
• Develop and optimize data workflows using PySpark, SQL, and Airflow.
• Work closely with AI/ML teams to support training pipelines and GenAI solution deployments.
• Integrate data with vector databases like ChromaDB or Pinecone for RAG-based pipelines.
• Collaborate with solution architects and GenAI leads to ensure reliable, real-time data availability for agentic AI and automation solutions.
• Support data quality, validation, and profiling processes.
Key Skills & Technology Areas:
• Programming & Data Processing: Python (4–6 years), PySpark, Pandas, NumPy
• Data Engineering & Pipelines: Apache Airflow, AWS Glue, Azure Data Factory, Databricks
• Cloud Platforms: AWS (S3, Lambda, Glue), Azure (ADF, Synapse), GCP (optional)
• Databases: SQL/NoSQL, Postgres, DynamoDB; vector databases (ChromaDB, Pinecone) preferred
• ML/GenAI Exposure (basic): hands-on with Pandas and scikit-learn; knowledge of RAG pipelines and GenAI concepts
• Data Modeling: star/snowflake schema, data normalization, dimensional modeling
• Version Control & CI/CD: Git, Jenkins, or similar tools for pipeline deployment

Other Requirements:
• Strong problem-solving and analytical skills
• Flexible to work on fast-paced and cross-functional priorities
• Experience collaborating with AI/ML or GenAI teams is a plus
• Good communication and a collaborative, team-first mindset
• Experience in Telecom, E-Commerce, or Enterprise IT Operations is a plus

Skills: ETL, Big Data, PySpark, SQL
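The dimensional-modeling bullet above refers to the star-schema pattern: a central fact table of events keyed into small dimension tables of descriptive attributes. A minimal sketch, with an entirely hypothetical inventory-issues schema, shows the typical join-then-aggregate query:

```python
# Hypothetical star-schema fragment: a fact table of stock-issue events
# keyed into a product dimension; names are illustrative only.
dim_product = {
    1: {"name": "helmet", "category": "safety"},
    2: {"name": "drill bit", "category": "tooling"},
}

fact_issues = [  # one row per stock-issue event
    {"product_id": 1, "site_id": 10, "qty": 4},
    {"product_id": 2, "site_id": 10, "qty": 1},
    {"product_id": 1, "site_id": 20, "qty": 2},
]

def qty_by_category(facts: list[dict], products: dict) -> dict:
    """Join the fact table out to the dimension and aggregate,
    i.e. SELECT category, SUM(qty) ... GROUP BY category."""
    totals: dict = {}
    for row in facts:
        cat = products[row["product_id"]]["category"]
        totals[cat] = totals.get(cat, 0) + row["qty"]
    return totals
```

In a warehouse this join-and-group is one SQL statement; keeping dimensions small and facts narrow is what makes such queries cheap at scale.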

Posted 2 weeks ago

Apply

0 years

2 - 4 Lacs

Hyderābād

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Key Responsibilities
- Develop, deploy, and monitor machine learning models in production environments.
- Automate ML pipelines for model training, validation, and deployment.
- Optimize ML model performance, scalability, and cost efficiency.
- Implement CI/CD workflows for ML model versioning, testing, and deployment.
- Manage and optimize data processing workflows for structured and unstructured data.
- Design, build, and maintain scalable ML infrastructure on cloud platforms.
- Implement monitoring, logging, and alerting solutions for model performance tracking.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into business applications.
- Ensure compliance with best practices for security, data privacy, and governance.
- Stay updated with the latest trends in MLOps, AI, and cloud technologies.

Mandatory Skills
Technical Skills:
- Programming Languages: Proficiency in Python (3.x) and SQL.
- ML Frameworks & Libraries: Extensive knowledge of ML frameworks (TensorFlow, PyTorch, Scikit-learn), data structures, data modeling, and software architecture.
- Databases: Experience with SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra, DynamoDB) databases.
- Mathematics & Algorithms: Strong understanding of mathematics, statistics, and algorithms for machine learning applications.
- ML Modules & REST APIs: Experience in developing and integrating ML modules with RESTful APIs.
- Version Control: Hands-on experience with Git and best practices for version control.
• Model Deployment & Monitoring: Experience in deploying and monitoring ML models using:
  - MLflow (model tracking, versioning, and deployment)
  - WhyLabs (model monitoring and data drift detection)
  - Kubeflow (orchestrating ML workflows)
  - Airflow (managing ML pipelines)
  - Docker & Kubernetes (containerization and orchestration)
  - Prometheus & Grafana (logging and real-time monitoring)
• Data Processing: Ability to process and transform unstructured data into meaningful insights (e.g., auto-tagging images, text-to-speech conversions).

Preferred Cloud & Infrastructure Skills:
• Cloud platforms: Knowledge of AWS Lambda, AWS API Gateway, AWS Glue, Athena, S3, Iceberg, and Azure AI Studio for model hosting, GPU/TPU usage, and scalable infrastructure.
• Infrastructure as Code: Hands-on with Terraform and CloudFormation for cloud automation.
• CI/CD pipelines: Experience integrating ML models into continuous integration/continuous delivery workflows; we mostly use Git-based CI/CD methods.
• Feature stores: Experience with feature stores (Feast, Tecton) for managing ML features.
• Big data: Knowledge of big data processing tools (Spark, Hadoop, Dask, Apache Beam).

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
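The data-drift detection mentioned above (the job names WhyLabs for it) boils down to comparing a live feature distribution against a training-time baseline. A common from-scratch metric is the Population Stability Index (PSI); the sketch below computes PSI over categorical values in plain Python. This illustrates the concept only, it is not the WhyLabs API, and the 0.2 threshold is the commonly cited rule of thumb for significant drift.

```python
import math
from collections import Counter

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two categorical samples.

    expected: baseline (training-time) values; actual: live values.
    PSI near 0 means the distributions match; > 0.2 is often read as drift.
    """
    ec, ac = Counter(expected), Counter(actual)
    total = 0.0
    for c in set(ec) | set(ac):
        pe = max(ec[c] / len(expected), eps)  # baseline share (eps avoids log(0))
        pa = max(ac[c] / len(actual), eps)    # live share
        total += (pa - pe) * math.log(pa / pe)
    return total

# Hypothetical model-output labels for illustration.
baseline = ["approve"] * 90 + ["reject"] * 10
live     = ["approve"] * 60 + ["reject"] * 40

print(round(psi(baseline, baseline), 4))  # 0.0  (identical distributions)
print(psi(baseline, live) > 0.2)          # True (distribution has shifted)
```

In production, the same comparison would run on a schedule over model inputs and outputs, alerting when PSI crosses the chosen threshold.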

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

Department: Delivery | Job posted on: Jul 18, 2025 | Employment type: Permanent

Job Title: AWS Q Developer Trainer
Job Type: Contract or Full Time
Travel Requirement: Comfortable with travel to client locations for training delivery (the company will cover travel and accommodation expenses, if required)
Certification: AWS certification is mandatory

Job Description: We are seeking an experienced, AWS-certified Cloud Application Developer Trainer to deliver high-impact training sessions aligned with AWS certification standards. The ideal candidate will have a strong technical foundation in AWS development services and tools, combined with a passion for teaching and mentoring professionals.

Key Responsibilities:
• Deliver Training: Facilitate hands-on training sessions for professionals preparing for AWS Cloud Application Developer certification.
• Content Development: Create, update, and maintain engaging training materials, including presentations, labs, exercises, and assessments.
• Learner Support: Provide post-training mentoring and resolve technical queries to ensure learner success.
• Client-Focused Delivery: Customize training programs based on specific client requirements and industry use cases.
• Performance Evaluation: Assess training outcomes through participant feedback and suggest improvements for future programs.

Qualifications:
• Education: Bachelor’s degree in Computer Science, Information Technology, or a related field.
• Experience: 3–5 years of hands-on experience in AWS cloud application development and prior training or mentoring experience.
• Skills: Proficiency in AWS services such as Lambda, API Gateway, DynamoDB, S3, CloudFormation, and IAM. Strong understanding of cloud-native application development and serverless architecture. Excellent communication and presentation skills. Experience with CI/CD tools and DevOps practices in the AWS ecosystem.
Preferred: Experience with AWS Developer Tools and SDKs. Familiarity with container services (ECS, EKS) and microservices architecture. Knowledge of Agile methodologies and cloud security best practices. About the Company: We are a global ed-tech leader with a presence in the US and India, focused on delivering cutting-edge learning programs in partnership with top academic institutions and global corporations. Our AI-powered digital learning platform integrates rigorous academics with industry expertise to drive impactful learning and exceptional learner engagement. Learn more about us at: www.talentsprint.com
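The Lambda + API Gateway pairing named in the skills above is usually taught through a hands-on lab built around a minimal proxy-integration handler like the one below. The event shape follows API Gateway's Lambda proxy format (`queryStringParameters`, and a response dict with `statusCode` and `body`); the greeting logic and names are illustrative only, and the handler runs locally without an AWS account.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler for an API Gateway proxy integration."""
    # Query parameters may be absent entirely, so guard with an empty dict.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation with a stubbed event, as a training lab might do:
resp = lambda_handler({"queryStringParameters": {"name": "trainee"}}, None)
print(resp["statusCode"], json.loads(resp["body"])["message"])  # 200 Hello, trainee!
```

Testing handlers locally like this, before wiring up API Gateway, is a common teaching progression for serverless courses.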

Posted 2 weeks ago

Apply

15.0 years

2 - 6 Lacs

Hyderābād

On-site

DESCRIPTION
At AWS, we are looking for a Delivery Practice Manager with a successful record of leading enterprise customers through a variety of transformative projects involving IT Strategy, distributed architecture, and hybrid cloud operations.

AWS Global Services includes experts from across AWS who help our customers design, build, operate, and secure their cloud environments. Customers innovate with AWS Professional Services, upskill with AWS Training and Certification, optimize with AWS Support and Managed Services, and meet objectives with AWS Security Assurance Services. Our expertise and emerging technologies include AWS Partners, AWS Sovereign Cloud, AWS International Product, and the Generative AI Innovation Center. You’ll join a diverse team of technical experts in dozens of countries who help customers achieve more with the AWS cloud.

Professional Services engages in a wide variety of projects for customers and partners, providing collective experience from across the AWS customer base, and is obsessed with strong success for the customer. Our team collaborates across the entire AWS organization to bring access to product and service teams, get the right solution delivered, and drive feature innovation based upon customer needs.

Key job responsibilities
- Engage customers - collaborate with enterprise sales managers to develop strong customer and partner relationships and build a growing business in a geographic territory, driving AWS adoption in key markets and accounts.
- Drive infrastructure engagements - including short on-site projects proving the value of AWS services to support new distributed computing models.
- Coach and teach - collaborate with AWS field sales, pre-sales, training and support teams to help partners and customers learn and use AWS services such as Amazon databases (RDS/Aurora/DynamoDB/Redshift), Amazon Elastic Compute Cloud (EC2), Amazon Simple Storage Service (S3), AWS Identity and Access Management (IAM), etc.
- Deliver value - lead high quality delivery of a variety of customized engagements with partners and enterprise customers in the commercial and public sectors.
- Lead great people - attract top IT architecture talent to build high-performing teams of consultants with superior technical depth and customer relationship skills.
- Be a customer advocate - work with AWS engineering teams to convey partner and enterprise customer feedback as input to AWS technology roadmaps.
- Build organization assets - identify patterns and implement solutions that can be leveraged across the customer base. Improve productivity through tooling and process improvements.

About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture: AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

BASIC QUALIFICATIONS
- Bachelor’s degree in Information Science / Information Technology, Computer Science, Engineering, Mathematics, Physics, or a related field.
- 15+ years of IT implementation and/or delivery experience, with 5+ years working in an IT Professional Services and/or consulting organization, and 5+ years of direct people management leading a team of consultants.
- Deep understanding of cloud computing, adoption strategy, and transition challenges.
- Experience managing a consulting practice or teams responsible for KRAs.
- Ability to travel to client locations to deliver professional services as needed.

PREFERRED QUALIFICATIONS
- Demonstrated ability to think strategically about business, product, and technical challenges.
- Vertical industry sales and delivery experience of contemporary services and solutions.
- Experience with design of modern, scalable delivery models for technology consulting services.
- Business development experience, including complex agreements with integrators and ISVs.
- International sales and delivery experience with global F500 enterprise customers and partners.
- Direct people management experience leading a team of at least 20, or manager-of-managers experience in a consulting practice.
- Use of AWS services in distributed environments with Microsoft, IBM, Oracle, HP, SAP, etc.

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information.
If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TS, Hyderabad IND, KA, Bangalore IND, MH, Maharashtra IND, HR, Gurugram Customer Service

Posted 2 weeks ago

Apply

5.0 years

2 - 4 Lacs

Hyderābād

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are the only professional services organization that has a separate business dedicated exclusively to the financial services marketplace. Join the Digital Engineering Team and you will work with multi-disciplinary teams from around the world to deliver a global perspective. Aligned to key industry groups including asset management, banking and capital markets, insurance and private equity, health, government, and power and utilities, we provide integrated advisory, assurance, tax, and transaction services. Through diverse experiences, world-class learning and individually tailored coaching you will experience ongoing professional development. That’s how we develop outstanding leaders who team to deliver on our promises to all of our stakeholders, and in so doing, play a critical role in building a better working world for our people, for our clients and for our communities. Sound interesting? Well, this is just the beginning. Because whenever you join, however long you stay, the exceptional EY experience lasts a lifetime.

We’re seeking a versatile Full Stack Developer with hands-on experience in Python (including multithreading and popular libraries), GenAI, and AWS cloud services. The ideal candidate should be proficient in backend development using NodeJS, ExpressJS, Python Flask/FastAPI, and RESTful API design; on the frontend, strong skills in Angular, ReactJS, and TypeScript are expected. EY Digital Engineering is a unique, industry-focused business unit that provides a broad range of integrated services that leverage deep industry experience with strong functional capability and product knowledge.
The Digital Engineering (DE) practice works with clients to analyse, formulate, design, mobilize and drive digital transformation initiatives. We advise clients on their most pressing digital challenges and opportunities surrounding business strategy, customer, growth, profit optimization, innovation, technology strategy, and digital transformation. We also have a unique ability to help our clients translate strategy into actionable technical design and transformation planning/mobilization. Through our unique combination of competencies and solutions, EY’s DE team helps our clients sustain competitive advantage and profitability by developing strategies to stay ahead of the rapid pace of change and disruption and supporting the execution of complex transformations.

Your key responsibilities
• Application Development: Design and develop cloud-native applications and services using AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, Glue, Redshift, and EMR.
• Deployment and Automation: Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, and CodeDeploy to automate application deployment and updates.
• Architecture Design: Collaborate with architects and other engineers to design scalable and secure application architectures on AWS.
• Performance Tuning: Monitor application performance and implement optimizations to enhance reliability, scalability, and efficiency.
• Security: Implement security best practices for AWS applications, including identity and access management (IAM), encryption, and secure coding practices.
• Container Services Management: Design and deploy containerized applications using AWS services such as Amazon ECS (Elastic Container Service), Amazon EKS (Elastic Kubernetes Service), and AWS Fargate. Configure and manage container orchestration, scaling, and deployment strategies. Optimize container performance and resource utilization by tuning settings and configurations.
• Application Observability: Implement and manage application observability tools such as AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack (Elasticsearch, Logstash, Kibana). Develop and configure monitoring, logging, and alerting systems to provide insights into application performance and health. Create dashboards and reports to visualize application metrics and logs for proactive monitoring and troubleshooting.
• Integration: Integrate AWS services with application components and external systems, ensuring smooth and efficient data flow.
• Troubleshooting: Diagnose and resolve issues related to application performance, availability, and reliability.
• Documentation: Create and maintain comprehensive documentation for application design, deployment processes, and configuration.

Skills and attributes for success
Required Skills:
• AWS Services: Proficiency in AWS services such as Lambda, API Gateway, ECS, EKS, DynamoDB, S3, RDS, Glue, Redshift, and EMR.
• Backend: Python (multithreading, Flask, FastAPI), NodeJS, ExpressJS, REST APIs
• Frontend: Angular, ReactJS, TypeScript
• Cloud Engineering: Development with AWS (Lambda, EC2, S3, API Gateway, DynamoDB), Docker, Git, etc.
• Proven experience in developing and deploying AI solutions with Python and JavaScript
• Strong background in machine learning, deep learning, and data modelling
• Good to have: CI/CD pipelines, full-stack architecture, unit testing, API integration
• Security: Understanding of AWS security best practices, including IAM, KMS, and encryption.
• Observability Tools: Proficiency in tools like AWS CloudWatch, AWS X-Ray, Prometheus, Grafana, and the ELK Stack.
• Container Orchestration: Knowledge of container orchestration concepts and tools, including Kubernetes and Docker Swarm.
• Monitoring: Experience with monitoring and logging tools such as AWS CloudWatch, CloudTrail, or the ELK Stack.
• Collaboration: Strong teamwork and communication skills with the ability to work effectively with cross-functional teams.

Preferred Qualifications:
• Certifications: AWS Certified Solutions Architect – Associate or Professional, AWS Certified Developer – Associate, or similar certifications.
• Experience: At least 5 years of experience in an application engineering role with a focus on AWS technologies.
• Agile Methodologies: Familiarity with Agile development practices and methodologies.
• Problem-Solving: Strong analytical skills with the ability to troubleshoot and resolve complex issues.

Education:
• Degree: Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field, or equivalent practical experience.

What we offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

• Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
• Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
• Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
• Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
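Among the backend skills this role calls for is Python multithreading. A minimal sketch of the usual pattern is below: fanning out I/O-bound work (API calls, downloads) across a thread pool with the standard library's `concurrent.futures`. Here `fetch()` is a stand-in for a real network call; everything else is illustrative.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def fetch(item):
    """Stand-in for an I/O-bound call (e.g. hitting a REST API)."""
    time.sleep(0.01)  # simulate network latency
    return item * 2

items = [1, 2, 3, 4, 5]

# Threads overlap the waiting time of I/O-bound tasks; pool.map preserves
# input order even though tasks finish concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, items))

print(results)  # [2, 4, 6, 8, 10]
```

Note that threads help with I/O-bound work; for CPU-bound transformations, `ProcessPoolExecutor` is usually the better fit in Python.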

Posted 2 weeks ago

Apply

0 years

4 - 6 Lacs

Chennai

On-site

Extensive experience in analytics and large-scale data processing across diverse data platforms and tools.
• Manage data storage and transformation across AWS S3, DynamoDB, Postgres, and Delta tables with efficient schema design and partitioning.
• Develop scalable analytics solutions using Athena and automate workflows with proper monitoring and error handling.
• Ensure data quality, access control, and compliance through robust validation, logging, and governance practices.
• Design and maintain data pipelines using Python, Spark, the Delta Lake framework, AWS Step Functions, EventBridge, AppFlow, and OAuth.

Tech Stack: S3, Postgres, DynamoDB, Tableau, Python, Spark

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
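The "efficient schema design and partitioning" responsibility above usually means laying out S3 objects in Hive-style partition paths (`year=/month=/day=`), which query engines such as Athena can prune to scan less data. A small sketch of building such keys follows; the dataset and file names are illustrative, not from any actual pipeline.

```python
from datetime import date

def partition_key(dataset: str, d: date, filename: str) -> str:
    """Build a Hive-style partitioned S3 object key for a daily dataset."""
    # Zero-padded month/day keep keys lexicographically sortable, which many
    # engines rely on for partition pruning.
    return f"{dataset}/year={d.year}/month={d.month:02d}/day={d.day:02d}/{filename}"

key = partition_key("sales_events", date(2024, 7, 18), "part-0000.parquet")
print(key)  # sales_events/year=2024/month=07/day=18/part-0000.parquet
```

With this layout, a query filtered to one day touches only that day's prefix instead of the whole dataset.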

Posted 2 weeks ago

Apply

3.0 - 7.0 years

5 - 20 Lacs

Noida

On-site

Lead Assistant Manager EXL/LAM/1411628 | Healthcare Analytics | Noida
Posted On: 03 Jul 2025 | End Date: 17 Aug 2025 | Required Experience: 3 - 7 Years

Basic Section: Number Of Positions: 3 | Band: B2 (Lead Assistant Manager) | Cost Code: D010360 | Campus/Non Campus: NON CAMPUS | Employment Type: Permanent | Requisition Type: New | Max CTC: 500000.0000 - 2000000.0000 | Complexity Level: Not Applicable | Work Type: Hybrid – Working Partly From Home And Partly From Office

Organisational: Group: Analytics | Sub Group: Healthcare | Organization: Healthcare Analytics | LOB: Healthcare D&A | SBU: Healthcare Analytics | Country: India | City: Noida | Center: Noida-SEZ BPO Solutions

Skills: AWS, SQL, PySpark, AWS Glue, Lambda, AWS services, Athena, Git
Minimum Qualification: B.TECH/B.E
Certification: No data available

Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services (Glue, Lambda, Step Functions, S3, Athena)

Job Description: We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, including S3 or other relevant services, for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Create interactive dashboards and data visualizations with Amazon QuickSight.
8. Automate workflows using AWS services such as EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up to date with industry best practices and emerging technologies in data engineering.

Qualifications:
1. Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree is a plus.
2. Strong proficiency in PySpark and Python for data processing and analysis.
3. Proficiency in SQL for data manipulation and querying.
4. Experience with version control systems, preferably Git.
5. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
6. Familiarity with Databricks and its concepts.
7. Excellent problem-solving skills and attention to detail.
8. Strong communication and collaboration skills to work effectively within a team.
9. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills:
1. Knowledge of data warehousing concepts and data modeling.
2. Familiarity with big data technologies like Hadoop and Spark.
3. AWS certifications related to data engineering.

Workflow Type: L&S-DA-Consulting
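The "write and optimize SQL queries for extraction, transformation, and loading" responsibility above can be sketched with one very common ETL transform: deduplicating change records so only the latest version of each key survives into the target table. The example below is self-contained SQLite standing in for a warehouse engine; the table and column names are hypothetical.

```python
import sqlite3

# Staging table with multiple versions per order; the transform keeps only
# the newest record for each order_id.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE staging_orders (order_id INTEGER, status TEXT, updated_at TEXT);
INSERT INTO staging_orders VALUES
    (1, 'created', '2024-01-01'),
    (1, 'shipped', '2024-01-03'),
    (2, 'created', '2024-01-02');
""")

# Latest-record-per-key via a self-join on MAX(updated_at).
rows = cur.execute("""
    SELECT s.order_id, s.status
    FROM staging_orders s
    JOIN (SELECT order_id, MAX(updated_at) AS latest
          FROM staging_orders
          GROUP BY order_id) m
      ON s.order_id = m.order_id AND s.updated_at = m.latest
    ORDER BY s.order_id
""").fetchall()
print(rows)  # [(1, 'shipped'), (2, 'created')]
```

The same query translates almost verbatim to Spark SQL or Athena (where `ROW_NUMBER() OVER (PARTITION BY ...)` is the other idiomatic option).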

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Ahmedabad

On-site

Position Overview
This role is responsible for defining and delivering ZURU’s next-generation data architecture, built for global scalability, real-time analytics, and AI enablement. You will lead the unification of fragmented data systems into a cohesive, cloud-native platform that supports advanced business intelligence and decision-making. Sitting at the intersection of data strategy, engineering, and commercial enablement, this role demands both deep technical acumen and strong cross-functional influence. You will drive the vision and implementation of robust data infrastructure, champion governance standards, and embed a culture of data excellence across the organisation.

Position Impact
In the first six months, the Head of Data Architecture will gain a deep understanding of ZURU’s operating model, technology stack, and data fragmentation challenges. You’ll conduct a comprehensive review of the current architecture, identifying performance gaps, security concerns, and integration challenges across systems like SAP, Odoo, POS, and marketing platforms. By month twelve, you’ll have delivered a fully aligned architecture roadmap, implementing cloud-native infrastructure, data governance standards, and scalable models and pipelines to support AI and analytics. You will have stood up a Centre of Excellence for Data, formalised global data team structures, and established yourself as a trusted partner to senior leadership.

What are you Going to do?
• Lead Global Data Architecture: Own the design, evolution, and delivery of ZURU’s enterprise data architecture across cloud and hybrid environments.
• Consolidate Core Systems: Unify data sources across SAP, Odoo, POS, IoT, and media into a single analytical platform optimised for business value.
• Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, and Snowflake.
• Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage.
• Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics.
• Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including demand forecasting, pricing models, and personalisation.
• Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Marketing, Finance, Supply Chain, R&D, and Technology.
• Optimize Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.
• Establish Data Leadership: Build and mentor a high-performing data team across India and NZ, and drive alignment across engineering, analytics, and governance.
• Vendor and Tool Strategy: Evaluate external tools and partners to ensure the data ecosystem is future-ready, scalable, and cost-effective.

What are we Looking for?
• 8+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments
• Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases
• Hands-on expertise with ingestion, transformation, and orchestration pipelines (e.g. Kafka, Airflow, DBT, Fivetran)
• Strong knowledge of ERP data models, especially SAP and Odoo
• Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices
• Familiarity with distributed systems and streaming frameworks like Spark or Flink
• Strong stakeholder management and communication skills, with the ability to influence both technical and business teams
• Experience building and leading cross-regional data teams

Tools & Technologies
• Cloud Platforms: AWS (S3, EMR, Kinesis, Glue), Azure (Synapse, ADLS), GCP
• Big Data: Hadoop, Apache Spark, Apache Flink
• Streaming: Kafka, Kinesis, Pub/Sub
• Orchestration: Airflow, Prefect, Dagster, DBT
• Warehousing: Snowflake, Redshift, BigQuery, Databricks Delta
• NoSQL: Cassandra, DynamoDB, HBase, Redis
• Query Engines: Presto/Trino, Athena
• IaC & CI/CD: Terraform, GitHub Actions
• Monitoring: Prometheus, Grafana, ELK, OpenTelemetry
• Security/Governance: IAM, TLS, KMS, Amundsen, DataHub, Collibra, DBT for lineage

What do we Offer?
• Competitive compensation
• 5 working days with flexible working hours
• Medical insurance for self & family
• Training & skill development programs
• Work with the global team, make the most of the diverse knowledge
• Several discussions over multiple pizza parties
• A lot more! Come and discover us!

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Description: We are currently seeking a highly motivated software engineer with solid technical credentials for the position of Lead Software Engineer within our Platform Engineering team. In this role, you will collaborate with technology peers and business partners to build and deploy the foundation for the next generation of modern cloud-native and SaaS software applications and services for Thomson Reuters.

About The Role
In this opportunity as a Software Engineer, you will:
• Develop high-quality code/scripts covering the areas below
• Work with the Python programming language and XSLT transformation
• Use AWS services like Lambda, Step Functions, CloudWatch, CloudFormation, S3, DynamoDB, PostgreSQL, Glue, etc.
• Get hands-on with custom template creation and LocalStack deployments
• Use GitHub Copilot on the job for quicker turnaround
• Good to have: working knowledge of Groovy, JavaScript and/or Angular 6+
• Work with XML content
• Write Lambdas for AWS Step Functions
• Adhere to best practices for development in Python, Groovy, JavaScript, and Angular
• Come up with functional unit test cases for the requirements in Python, Groovy, JavaScript, and Angular
• Actively participate in code reviews of your own and your peers' work
• Work with different AWS capabilities
• Understand integration points of upstream and downstream processes
• Learn new frameworks that are needed for implementation
• Maintain and update the Agile/Scrum dashboard for accurate tracking of your own tasks
• Proactively pick up tasks and work toward completing them on aggressive timelines
• Understand the existing functionality of the systems and suggest how we can improve it

About you: You’re a fit for the role of Software Engineer if you have:
• Strong Python development & React JS skills
• 4+ years of experience in relevant technologies
• Proficiency in Python programming
• Experience with XSLT transformation
• Skill in AWS services (Lambda, Step Functions, S3, etc.)
• Familiarity with Oracle/SQL and Unix/Linux
• Hands-on experience with custom template creation and LocalStack deployments
• Familiarity with GitHub Copilot for efficiency
• Strong understanding of cloud concepts

What’s in it For You?
• Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
• Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
• Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
• Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
• Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
• Social Impact: Make an impact in your community with our Social Impact Institute.
We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. 
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
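The role above centers on writing Python Lambdas that run as AWS Step Functions tasks over XML content. A minimal sketch of such a task handler, using only the standard library; the event shape, field names, and sample document are hypothetical, not from the posting:

```python
import xml.etree.ElementTree as ET

def handler(event, context=None):
    """Step Functions task handler: parse an XML payload from the state
    input, pull out selected fields, and return them as state output."""
    root = ET.fromstring(event["xml"])
    record = {"id": root.findtext("id"), "title": root.findtext("title")}
    # Step Functions passes the returned dict to the next state as its output.
    return {"statusCode": 200, "record": record}

# Local invocation with a hypothetical sample document:
sample_out = handler({"xml": "<doc><id>42</id><title>Case Law Update</title></doc>"})
```

In a real state machine, the returned dict becomes the task's output, which later states can filter with OutputPath or ResultSelector.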

Posted 2 weeks ago

Apply

32.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Sun Life Global Solutions (SLGS) With 32 years of operations in the Philippines and 17 years in India, Sun Life Global Solutions, (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives – our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a ‘Great Place to Work’, ‘Top 100 Best Places to Work for Women’ and stand among the ‘Top 11 Global Business Services Companies’ across India and the Philippines. The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end to end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, Support, Testing, Digital, Data Engineering and Analytics, Infrastructure Services and Project Management. We are constantly expanding our strength in Information technology and are looking for fresh talents who can bring ideas and values aligning with our Digital strategy. 
Role & responsibilities Design and implement complex cloud-based solutions using AWS services (S3, Lambda, Bedrock, etc.) Design and optimize database schemas and queries, particularly with DynamoDB or other databases Write, test, and maintain high-quality Java, API, and Python code for cloud-based applications Collaborate with cross-functional teams to identify and implement cloud-based solutions Ensure security, compliance, and best practices in cloud infrastructure Troubleshoot and resolve complex technical issues in cloud environments Mentor junior engineers and contribute to the team's technical growth Stay up-to-date with the latest cloud technologies and industry trends Preferred candidate profile Bachelor's degree in Computer Science, Engineering, or a related field 5-10 years of experience in cloud engineering, with a strong focus on AWS Extensive experience with Java, AWS, API, and Python programming and software development Strong knowledge of database systems, particularly DynamoDB or other databases Hands-on experience with AWS services (S3, Lambda, Bedrock, etc.) Excellent problem-solving and analytical skills Strong communication and collaboration abilities
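Since the role emphasizes DynamoDB schema design, here is a sketch of the single-table key pattern that design work usually starts from; the CUSTOMER/ORDER entities and key formats are illustrative assumptions, not from the posting:

```python
def order_keys(customer_id: str, order_id: str, placed_at: str) -> dict:
    """Composite keys for a single-table DynamoDB design: all orders for a
    customer share one partition key, and the sort key orders them by date."""
    return {
        "PK": f"CUSTOMER#{customer_id}",
        "SK": f"ORDER#{placed_at}#{order_id}",
    }

# Query pattern this enables (boto3 pseudo-usage):
#   Key("PK").eq("CUSTOMER#c1") & Key("SK").begins_with("ORDER#2024")
```

With keys shaped this way, a single Query on the partition key fetches a customer and all of their orders, and a begins_with condition on the sort key narrows the result to a date range.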

Posted 2 weeks ago

Apply

32.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Sun Life Global Solutions (SLGS) With 32 years of operations in the Philippines and 17 years in India, Sun Life Global Solutions, (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives – our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a ‘Great Place to Work’, ‘Top 100 Best Places to Work for Women’ and stand among the ‘Top 11 Global Business Services Companies’ across India and the Philippines. The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end to end solutions for our clients, and taking a lead in intelligent automation. Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, Support, Testing, Digital, Data Engineering and Analytics, Infrastructure Services and Project Management. We are constantly expanding our strength in Information technology and are looking for fresh talents who can bring ideas and values aligning with our Digital strategy. 
Role & responsibilities Design and implement complex cloud-based solutions using AWS services (S3, Lambda, Bedrock, etc.) Design and optimize database schemas and queries, particularly with DynamoDB or other databases Write, test, and maintain high-quality Java, API, and Python code for cloud-based applications Collaborate with cross-functional teams to identify and implement cloud-based solutions Ensure security, compliance, and best practices in cloud infrastructure Troubleshoot and resolve complex technical issues in cloud environments Mentor junior engineers and contribute to the team's technical growth Stay up-to-date with the latest cloud technologies and industry trends Preferred candidate profile Bachelor's degree in Computer Science, Engineering, or a related field 5-10 years of experience in cloud engineering, with a strong focus on AWS Extensive experience with Java, AWS, API, and Python programming and software development Strong knowledge of database systems, particularly DynamoDB or other databases Hands-on experience with AWS services (S3, Lambda, Bedrock, etc.) Excellent problem-solving and analytical skills Strong communication and collaboration abilities
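Troubleshooting and reliability work against AWS services frequently involves throttled or transient failures; a generic retry-with-exponential-backoff sketch (the flaky function below is a local stand-in, not a real AWS SDK call):

```python
import time

def with_retries(fn, attempts=4, base_delay=0.01, retriable=(TimeoutError,)):
    """Call fn, retrying transient failures with exponential backoff,
    a common pattern around throttled cloud SDK calls."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # budget exhausted: surface the error
            time.sleep(base_delay * (2 ** attempt))

# Stand-in for a throttled service call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("throttled")
    return "ok"
```

Production code would add jitter to the delay and cap the total wait, but the shape is the same.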

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Company : They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society. About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries. Job Title: AWS Infrastructure Location: Pune, Mumbai, Chennai, Bangalore Experience: 5+ years Job Type: Contract to hire Notice Period: Immediate joiners Detailed JD: Lead efforts to troubleshoot and resolve AWS infrastructure and operational issues, ensuring minimal downtime and optimal performance. Architect and deploy scalable, secure, and efficient solutions on AWS that align with business objectives. Provide hands-on support for migrating Azure and on-premises systems to AWS, ensuring smooth transitions and minimizing disruptions. 
Monitor, assess, and enhance the performance of AWS environments using tools like CloudWatch, AWS Trusted Advisor, and Cost Explorer. Automate AWS infrastructure provisioning and management using CloudFormation and Terraform. Monitor and optimize cloud costs and implement best practices for security using AWS IAM, KMS, GuardDuty, and other security tools. Collaborate with development, DevOps, and operations teams to ensure seamless integration of AWS services and support day-to-day operations. Create and maintain technical documentation and ensure that the operational team follows AWS best practices. Qualifications: 1. 6 years of experience in AWS cloud architecture and operations 2. Expertise in AWS services such as EC2, Lambda, S3, RDS, DynamoDB, VPC, Route 53, and more 3. Proven experience migrating on-premises and Azure workloads to AWS using migration tools 4. Strong understanding of AWS networking, including VPCs, VPNs, and Direct Connect 5. AWS Certified Solutions Architect Professional and AWS DevOps certifications preferred
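Cost optimization with CloudWatch and Cost Explorer, as described above, often reduces to checking utilization metrics against a threshold. A sketch over CloudWatch-style CPUUtilization datapoints; the 20% threshold and the datapoint shape are illustrative assumptions:

```python
def flag_underutilized(datapoints, threshold=20.0):
    """Given CloudWatch-style datapoints ({'Average': percent}), flag an
    instance whose mean CPU stays under the threshold, a candidate for
    right-sizing during cost reviews."""
    if not datapoints:
        return False  # no data: do not flag
    mean = sum(p["Average"] for p in datapoints) / len(datapoints)
    return mean < threshold
```

In practice the datapoints would come from a `get_metric_statistics` or `get_metric_data` call, and the flagged instances would feed a right-sizing report.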

Posted 2 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About The Job We are an innovative global healthcare company; driven by one purpose we chase the miracles of science to improve people’s lives. Our team, across some 100 countries, is dedicated to transforming the practice of medicine by working to turn the impossible into the possible. We provide potentially life-changing treatment options and life-saving vaccine protection to millions of people globally, while putting sustainability and social responsibility at the center of our ambitions. Sanofi has recently embarked into a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives. Join our Application Center of Excellence (COE) team as the Technical Engineering Lead and take a pivotal role in centralizing and advancing engineering capabilities across Digital R&D. In this role, you will lead, manage, and mentor a high-performing Agile engineering team, driving innovation and operational excellence in software development. What You Will Be Doing Your role is critical in building innovative solutions that impact lives globally, whether by enhancing existing services or launching new ones. You’ll also collaborate closely with cross-functional teams to troubleshoot issues, define product requirements, and design solutions that align with Sanofi’s mission. Join us as we harness technology to redefine healthcare innovation and make a meaningful impact worldwide. Provide Technical Leadership: Guide software engineering teams with technical and leadership expertise, fostering effective collaboration and high productivity. Leverage Modern Advancements: Implement cutting-edge technologies, including GenAI, to enhance software development efficiency and innovation. 
Architect Scalable Solutions: Design and develop high-performance, scalable applications using microservices architecture, with a focus on observability and reliability. Demonstrate Deep Expertise: Showcase technical mastery in modern internet architectures, frameworks, and best practices to drive engineering excellence. Drive Continuous Improvement: Lead initiatives to enhance processes and outcomes across cross-platform teams, creating an Agile, adaptive environment. Promote Learning Culture: Embrace and encourage a fast-learning mindset, advocating for continuous professional growth within the team. Champion Agile Principles: Advocate for Agile practices, ensuring their effective adoption and maturity across teams. About You You bring a minimum of 10 years of experience managing software engineering teams, with a proven track record of leading groups of 15+ engineers. Demonstrated success in delivering complex projects, mentoring team members effectively, and fostering a culture of collaboration and innovation. Extensive experience driving integration initiatives across diverse systems and ensuring seamless interoperability at scale. Technical Skills Expertise in software architecture, microservices development, and scalable application design. Proficiency in designing and implementing system integrations using APIs, middleware, and messaging systems, with strong knowledge of integration tools and patterns such as RESTful APIs, GraphQL, and event-driven architectures. Strong coding skills in languages such as Python, Java, or Scala, as well as SQL. Deep understanding of cloud databases (e.g., Snowflake) and data management solutions, including AWS RDS, DynamoDB, and S3, focusing on scalability, reliability, and performance optimization. Proven ability to design, deploy, and manage secure, reliable integrations with cloud-based platforms and services, ensuring seamless data flow and system scalability. 
Nice to have experience with advanced GenAI technologies, such as AWS Q and ChatGPT, alongside key AWS components like Lambda, SNS, and more, to deliver robust, cloud-native solutions. Soft Skills Excellent communication and collaboration skills, with the ability to work across multidisciplinary teams to deliver end-to-end solutions. A passion for continuous learning, staying ahead of technology trends, and promoting adaptability within the team. Education: A degree in Computer Science, Software Engineering, or a related field is required. Advanced degrees or certifications are a plus but not mandatory if your experience and skills align with the role. Languages: Proficiency in English is essential (other languages are a plus). Why choose us? Bring the miracles of science to life alongside a supportive, future-focused team. Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally. Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact. Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs and at least 14 weeks’ gender-neutral parental leave. Opportunity to work in an international environment, collaborating with diverse business teams and vendors, working in a dynamic team, and fully empowered to propose and implement innovative ideas. Pursue Progress. Discover Extraordinary. Progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. You can be one of those people. Chasing change, embracing new ideas and exploring all the opportunities we have to offer. Let’s pursue progress. And let’s discover extraordinary together. 
At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, or gender identity. Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com! Pursue Progress. Discover Extraordinary. Join Sanofi and step into a new era of science - where your growth can be just as transformative as the work we do. We invest in you to reach further, think faster, and do what’s never-been-done-before. You’ll help push boundaries, challenge convention, and build smarter solutions that reach the communities we serve. Ready to chase the miracles of science and improve people’s lives? Let’s Pursue Progress and Discover Extraordinary – together. At Sanofi, we provide equal opportunities to all regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, protected veteran status or other characteristics protected by law.
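The integration skills the role above calls for (messaging systems, event-driven architectures) follow the publish/subscribe shape sketched below; this in-process bus is a teaching stand-in for a broker such as SNS, EventBridge, or Kafka, and the topic name is hypothetical:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus illustrating publish/subscribe:
    producers publish to a topic, every subscriber handler receives a copy."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subs[topic]:
            handler(payload)

# Hypothetical usage: a downstream system reacts to new lab samples.
bus = EventBus()
received = []
bus.subscribe("sample.created", received.append)
bus.publish("sample.created", {"sample_id": "S-1"})
```

The design point carries over to the real brokers: producers know topics, not consumers, which is what makes integrations loosely coupled and independently scalable.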

Posted 2 weeks ago

Apply

3.0 years

0 Lacs

Chandigarh, India

On-site

Full Stack The Role We are looking for a skilled and motivated Full Stack Developer with a strong background in Node.js, Python, JavaScript, SQL, and AWS. The ideal candidate will have experience in designing and building scalable, maintainable, and high-performance applications, with a strong grasp of OOP concepts and serverless architecture. Responsibilities: Design, develop, and maintain full-stack applications using Node.js, Python, and JavaScript. Develop RESTful APIs and integrate with microservices-based architecture. Work with SQL databases (e.g., PostgreSQL, MySQL) to design and optimize data models. Implement solutions on AWS cloud, leveraging services such as Lambda, API Gateway, DynamoDB, RDS, and more. Architect and build serverless and cloud-native applications. Follow object-oriented programming (OOP) principles and design patterns for clean, maintainable code. Collaborate with cross-functional teams including Product, DevOps, and QA. Participate in code reviews, testing, and deployment processes. Ensure security, performance, and scalability of applications. Requirements: 3+ years of experience in full-stack development. Strong proficiency in Node.js, Python, and JavaScript. Experience with SQL databases and writing complex queries. Solid understanding of AWS services, especially in serverless architecture. Deep knowledge of OOP principles and software design patterns. Familiarity with microservices architecture and distributed systems. Experience with CI/CD pipelines and version control (Git). Strong problem-solving and debugging skills. (ref:hirist.tech)
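Serverless REST APIs of the kind described above usually route an API Gateway event to a handler by method and path. A minimal, framework-free sketch; the event fields mirror API Gateway's REST-proxy shape, and the /health route is a made-up example:

```python
ROUTES = {}

def route(method, path):
    """Decorator that registers a handler under (method, path)."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("GET", "/health")
def health(event):
    return {"statusCode": 200, "body": "ok"}

def dispatch(event):
    """API Gateway-style dispatch: look up a handler by method and path,
    fall back to 404 when nothing is registered."""
    handler = ROUTES.get((event["httpMethod"], event["path"]))
    if handler is None:
        return {"statusCode": 404, "body": "not found"}
    return handler(event)
```

A framework like Express.js (on the Node side) or API Gateway's own route config does the same lookup; this just makes the mechanism visible.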

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

We are looking for the very best engineers to join our Indian office/subsidiary as we start a brand-new R&D center in NCR India. There isn't a better time to be part of the global WatchGuard engineering team, which has dominated the Network and Endpoint Security products and services market through innovation, technical skills, subject matter expertise, and a customer-first mindset. You will be in startup mode with the support of a mature global organization, aiming to build the next generation of security services for the AWS cloud and develop user-friendly solutions for the WatchGuard Cloud. Your focus, experience, and technical expertise are needed to champion our mission in NCR and collaborate across various functions within WatchGuard. Your knowledge of agile software development practices and experience in creating cloud-native applications and services using cutting-edge tools in an AWS environment will be essential for success in this role. As a dynamic, motivated, driven, and smart individual, you will have the opportunity to be part of a fast-growing cybersecurity company. If you are intrigued by this challenge and want to create something great, keep reading. Most days at WatchGuard are fast-paced and challenging, requiring your ability to thrive in an ever-changing environment. Your daily priorities may include discussions with your team on development activities, strategizing improvements, deep-diving into new areas, and ensuring effective monitoring of production. Your role is crucial in helping WatchGuard build a world-class cloud development team and deliver cutting-edge solutions with enterprise-grade security. Key Responsibilities: - Design and build fault-tolerant and failsafe Platform and Services for WatchGuard Cloud applications with SLAs reaching up to 99.999%. - Foster an environment of collaboration, transparency, innovation, and fun. - Collaborate with globally distributed teams to deliver on priorities and commitments. 
- Enhance the cloud delivery model (CI/CD) and improve the security model. - Focus on making development more efficient, prioritized, and enjoyable. Your Experience Should Include: - Analytical mindset, ownership, and resilience in the face of failure. - Experience in building cloud services using languages such as Python, Go, Java, C++, Scala, C#, and frontend technologies like Angular, React. - Familiarity with databases like MySQL, Elasticsearch, DynamoDB, MSSQL, and messaging queues. - Proficiency in AWS services and understanding of Scrum/Agile & DevOps processes. - Comfort with tools/systems like Jira, GitHub, Confluence, Jenkins, etc. Who You Are: - Not necessarily an expert in security, but with disciplined engineering practices and a collaborative mindset. - A believer in the endless possibilities of being part of a growing global company in the cybersecurity domain. - A problem-solver with out-of-the-box thinking and a sense of urgency. - Highly motivated, passionate about cloud environments and new technologies, with strong communication skills. - Open to diverse ideas and rational thinking to arrive at the right solutions.
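A 99.999% availability SLA, as mentioned in the responsibilities above, translates into a very small downtime budget, which this short helper computes:

```python
def downtime_budget_minutes(sla_percent: float, days: float = 365.0) -> float:
    """Minutes of allowed downtime per period for a given availability SLA."""
    return (1 - sla_percent / 100.0) * days * 24 * 60

# "Five nines" leaves roughly 5.26 minutes of downtime per year:
five_nines = downtime_budget_minutes(99.999)
```

That budget is why the posting stresses fault-tolerant, failsafe design: at five nines there is essentially no room for manual recovery, so failover must be automatic.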

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

haryana

On-site

As a Senior Software Engineer with 6+ years of experience, you will be responsible for full-stack development, including developing and maintaining high-quality web applications across the full stack, encompassing both client-side and server-side logic. Your expertise in front-end development will be crucial as you leverage your skills in client-side frameworks like React or Angular to build intuitive, responsive, and visually appealing user interfaces. You should have a strong understanding of object-oriented JavaScript and TypeScript, along with excellent HTML/CSS skills to make data both functional and visually appealing. In terms of back-end development, hands-on experience and a solid understanding of back-end development using .Net / C#, Java, or Kotlin is essential. Architectural and design contributions are also expected from you, where you will apply best practices in software design to ensure scalability, resilience, and maintainability of applications. Active participation in agile methodologies, including agile ceremonies like sprints, stand-ups, and retrospectives, is required. Familiarity with analytics, A/B testing, feature flags, Continuous Delivery, and Trunk-based Development will be beneficial. Your role will also involve maintaining code quality and optimization by writing clean, efficient, and well-documented code. You should proactively identify and address performance bottlenecks and ensure code quality through reviews. Problem-solving skills are crucial as you tackle complex technical challenges effectively. Proficiency in competitive programming/data structures & algorithms is expected, demonstrated by hands-on LeetCode experience. Strong communication and coordination skills are necessary to effectively collaborate with cross-functional teams, stakeholders, and product managers. A passion for new technologies and continuous exploration of the best tools and practices available is highly valued. 
Qualifications required for this role include a B.S. in Computer Science or a quantitative field; M.S. is preferred. Additionally, 6+ years of hands-on software development experience is essential. Having experience in system architecture design, knowledge of NoSQL technologies, hands-on experience with message queuing systems, and familiarity with containerization & orchestration will be great assets for this role. Desired skills and experience include proficiency in front-end technologies like React or Angular, JavaScript, TypeScript, and HTML/CSS, as well as back-end technologies such as .Net / C#, Java, and Kotlin. Knowledge of databases like RDBMS and NoSQL (Cassandra, ScyllaDB, Elasticsearch, Redis, DynamoDB), and messaging systems like Kafka, RabbitMQ, SQS, and Azure Service Bus will be advantageous.
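The posting calls out hands-on data structures and algorithms practice; the classic hash-map two-sum below is the kind of O(n) pattern such screens test for:

```python
def two_sum(nums, target):
    """Return indices of the two numbers summing to target, or [] if none.
    One pass with a value-to-index map: O(n) time, O(n) space, versus the
    O(n^2) brute force over all pairs."""
    seen = {}
    for i, value in enumerate(nums):
        if target - value in seen:
            return [seen[target - value], i]
        seen[value] = i
    return []
```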

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary We are looking for a Senior Tech Lead Java to drive the architecture, design, and development of scalable, high-performance applications. The ideal candidate will have expertise in Java, Spring Boot, Microservices, and AWS and be capable of leading a team of engineers in building enterprise-grade solutions. Key Responsibilities Lead the design and development of complex, scalable, and high-performance Java applications. Architect and implement Microservices-based solutions using Spring Boot. Optimize and enhance existing applications for performance, scalability, and reliability. Provide technical leadership, mentoring, and guidance to the development team. Work closely with cross-functional teams, including Product Management, DevOps, and QA, to deliver high-quality software. Ensure best practices in coding, testing, security, and deployment. Design and implement cloud-native applications using AWS services such as EC2, Lambda, S3, RDS, API Gateway, and Kubernetes. Troubleshoot and resolve technical issues and system bottlenecks. Stay up to date with the latest technologies and drive innovation within the team. Required Skills & Qualifications 8+ years of experience in Java development. Strong expertise in Spring Boot, Spring Cloud, and Microservices architecture. Hands-on experience with RESTful APIs, event-driven architecture, and messaging systems (Kafka, RabbitMQ, etc.). Deep understanding of database technologies such as MySQL, PostgreSQL, or NoSQL (MongoDB, DynamoDB, etc.). Experience with CI/CD pipelines and DevOps tools (Jenkins, Docker, Kubernetes, Terraform, etc.). Proficiency in AWS cloud services and infrastructure. Strong knowledge of security best practices, performance tuning, and monitoring. Excellent problem-solving skills and ability to work in an Agile environment. Strong communication and leadership skills (ref:hirist.tech)
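Resilience work in a microservices stack of the kind described above often uses a circuit breaker so a failing downstream service is not hammered with calls. A minimal sketch, shown in Python for brevity rather than the posting's Java; the threshold and the failing call are illustrative:

```python
class CircuitBreaker:
    """Minimal circuit breaker: after max_failures consecutive failures the
    circuit opens and further calls are rejected until reset()."""
    def __init__(self, max_failures=3):
        self.max_failures = max_failures
        self.failures = 0
        self.open = False

    def call(self, fn):
        if self.open:
            raise RuntimeError("circuit open")
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.open = True  # stop calling the failing dependency
            raise
        self.failures = 0  # any success resets the failure count
        return result

    def reset(self):
        self.failures = 0
        self.open = False

# Stand-in for an unreachable downstream service:
breaker = CircuitBreaker(max_failures=2)
def failing_call():
    raise ConnectionError("downstream unavailable")

for _ in range(2):
    try:
        breaker.call(failing_call)
    except ConnectionError:
        pass
```

Production implementations (e.g. Resilience4j in the Java world) add a half-open state that probes the dependency after a cooldown instead of requiring a manual reset.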

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

At Goldman Sachs, our Engineers play a crucial role in making things possible by connecting people and capital with ideas. You have the opportunity to change the world by solving challenging engineering problems, building massively scalable software and systems, architecting low latency infrastructure solutions, proactively guarding against cyber threats, and leveraging machine learning alongside financial engineering to turn data into action continuously. In our dynamic environment, innovative strategic thinking and immediate, real solutions are essential to push the limit of digital possibilities and explore a world of opportunity at the speed of markets. We are looking for Engineers who are innovators and problem-solvers, specializing in risk management, big data, and more. We seek creative collaborators who can evolve, adapt to change, and thrive in a fast-paced global environment. As part of the Transaction Banking team within Platform Solutions at Goldman Sachs, you will work towards providing comprehensive cash management solutions for corporations. By combining the strength and heritage of a 155-year-old financial institution with the agility and entrepreneurial spirit of a tech start-up, we aim to offer the best client experience through modern technologies centered on data and analytics. Joining the Digital Engineering team means being responsible for creating a unified digital experience for clients interacting with Transaction Banking products across various interfaces. The team's mission is to build a cutting-edge digital interface that meets corporate clients' needs, focusing on scalability, resilience, and 24x7 availability of a cloud-based platform. As a team member, you will have the opportunity to evolve through the entire software life-cycle, closely collaborate with stakeholders, and contribute to shaping a world-class engineering culture within the team. 
In this role, you will be involved in the development, testing, rollout, and support of new client-facing features, working alongside product owners and stakeholders. By participating in a global team, you will help shape and implement the strategic vision of a consumer-grade, industry-leading Digital Experience, integrating business value and client experience within the team. To be successful in this position, you should possess a BS degree in Computer Science or a related technical field involving programming or systems engineering, along with a minimum of 3 years of relevant professional experience using JavaScript/TypeScript. Experience with building modern UI products using React, working with high availability systems, and collaborating across different teams are crucial. Preferred qualifications include experience with Node, Webpack, MobX, Redux, Microservice architectures, REST API, SQL databases, and familiarity with Agile operating models. If you have lots of energy, excellent communication skills, enjoy engineering challenges, and have a passion for delivering high-quality technology products in a rapidly changing environment, we would like to hear from you.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services – Glue, Lambda, Step Functions, S3, Athena. Job Description We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics. Responsibilities Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently. Collaborate with analysts to understand data requirements and ensure data availability and quality. Write and optimize SQL queries for data extraction, transformation, and loading. Utilize Git for version control, ensuring proper documentation and tracking of code changes. Design, implement, and manage scalable data lakes on AWS, including S3 or other relevant services, for efficient data storage and retrieval. Develop and optimize high-performance, scalable databases using Amazon DynamoDB. Use Amazon QuickSight to create interactive dashboards and data visualizations. Automate workflows using AWS services like EventBridge and Step Functions. Monitor and optimize data processing workflows for performance and scalability. Troubleshoot data-related issues and provide timely resolution. Stay up-to-date with industry best practices and emerging technologies in data engineering. Qualifications Bachelor's degree in Computer Science, Data Science, or a related field. Master's degree is a plus. Strong proficiency in PySpark and Python for data processing and analysis. Proficiency in SQL for data manipulation and querying. Experience with version control systems, preferably Git. Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc. 
Familiarity with Databricks and it’s concepts. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills to work effectively within a team. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment. Preferred Skills Knowledge of data warehousing concepts and data modeling. Familiarity with big data technologies like Hadoop and Spark. AWS certifications related to data engineering.
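The extract-transform-load responsibility above can be sketched with a minimal, self-contained example. A real pipeline for this role would run on PySpark and AWS Glue against S3 and a warehouse; here SQLite stands in for the warehouse, and the `staging_events` and `user_totals` tables and their columns are illustrative assumptions, not details from the posting.

```python
import sqlite3

# Minimal ETL sketch: extract raw rows from a staging table, transform
# (filter bad records, aggregate per user), and load into a target table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Extract: raw events land in a staging table (e.g. from an ingest job).
cur.execute("CREATE TABLE staging_events (user_id INTEGER, amount REAL, status TEXT)")
cur.executemany(
    "INSERT INTO staging_events VALUES (?, ?, ?)",
    [(1, 10.0, "ok"), (1, 5.0, "ok"), (2, 7.5, "failed"), (2, 2.5, "ok")],
)

# Transform + Load in one statement, pushing the filtering and aggregation
# into the SQL engine rather than doing it row by row in application code.
cur.execute("CREATE TABLE user_totals (user_id INTEGER PRIMARY KEY, total REAL)")
cur.execute(
    """
    INSERT INTO user_totals (user_id, total)
    SELECT user_id, SUM(amount)
    FROM staging_events
    WHERE status = 'ok'
    GROUP BY user_id
    """
)
conn.commit()

rows = cur.execute("SELECT user_id, total FROM user_totals ORDER BY user_id").fetchall()
print(rows)  # [(1, 15.0), (2, 2.5)]
```

The same shape carries over to PySpark, where the filter and aggregation would be expressed as DataFrame operations or Spark SQL over S3-backed tables.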

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Flexsin Technologies is seeking a dynamic and highly motivated Team Lead - MEAN who can join immediately. In this role, you will lead and oversee projects of varying sizes across multiple domains, showcasing your strong technical knowledge, exceptional project management skills, and ability to thrive in a consulting environment.

Your responsibilities include providing technical leadership by demonstrating proficiency in languages and frameworks such as Java, React.js, and Node.js. You will guide project teams, ensuring the successful execution of technical tasks. You will also develop comprehensive project plans, manage project timelines and resources, and align project goals with client expectations and company objectives. Drawing on your diverse project experience, you will adapt to various domains, collaborate with cross-functional teams, and drive project success across different business areas. As a Team Lead, you will provide clear direction, motivation, and support to team members, ensuring project goals are met within specified timelines and budgets. You will implement quality control processes, maintain accurate project documentation, and provide regular updates to senior management and stakeholders.

Your qualifications should include a Bachelor's degree in a relevant field (Master's preferred), a minimum of 8 years of total working experience, and 3-5 years of project management experience. A strong technical background in backend languages like Node.js, experience in a consulting environment, and proficiency in source code management tools and databases are essential. Prior team management experience in an Agile/Scrum environment, knowledge of Node.js, Express.js, and microservices, and experience with UI libraries such as Angular and React are desired skills.

Key personal attributes for this role include strong leadership and teamwork skills, a proactive approach to identifying opportunities, exceptional problem-solving abilities, adaptability to different domains and industries, and a curious, adaptable mindset. Proficiency in English, Microsoft Excel, and PowerPoint is necessary to communicate project updates, manage stakeholder expectations, and address concerns effectively.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

At Allstate, great things happen when our people work together to protect families and their belongings from life’s uncertainties. For more than 90 years our innovative drive has kept us a step ahead of our customers’ evolving needs: from advocating for seat belts, air bags, and graduated driving laws to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

Job Description The Software Engineer Lead Consultant architects and designs digital products using modern tools, technologies, frameworks, and systems. They apply a systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software. They own and manage running their application in production, and are ultimately accountable for the success of their digital products through achieving KPIs.

Job Title: Senior Software Engineer

About Arity And Our Ad Platform Team
Arity, a technology company founded by Allstate, is transforming transportation by leveraging one of the largest driving-behavior databases globally. Arity’s ad platform team plays a key role in the programmatic advertising ecosystem, specifically via Arity PMP (Private Marketplace), which offers brands a unique way to reach highly targeted audiences based on driving behaviors and predictive analytics. Our team uses advanced telematics data to help insurers, advertisers, and transportation companies optimize strategies while enhancing customer experiences and reducing operational costs.

Job Description We are seeking a highly skilled Senior Software Engineer with 8 years of experience in software development, particularly in the .NET stack, React, and AWS. The ideal candidate will have hands-on experience building and scaling microservices in a high-traffic environment. They will work closely with a high-performing team, contributing to the design, development, and deployment of our cutting-edge ad platform while expanding their knowledge of modern technologies like React, Go, and telematics-based programmatic advertising.

Key Responsibilities
Collaborate with Architects, Engineers, and Business stakeholders to understand technical and business requirements and deliver scalable solutions.
Design, develop, and maintain microservices using C#, Go, React, and AWS services such as Lambda, S3, and RDS.
Participate in code reviews, design discussions, and team retrospectives to foster a collaborative and high-performance engineering culture.
Build and enhance CI/CD pipelines to ensure reliable and secure deployments.
Implement performance monitoring and optimization practices to ensure the reliability of high-transaction systems.
Expand technical expertise in modern stacks, including React and Go.

Experience & Qualifications
4-8 years of professional experience in Microsoft .NET and C# development.
Proficiency in building and maintaining cloud-native applications, preferably on AWS.
Experience designing, developing, and deploying microservices in a high-traffic or real-time environment.
Experience with frontend technologies like React, CSS, HTML, and JavaScript.
Familiarity with database technologies such as Redis, DynamoDB, and Redshift is a plus.
Strong problem-solving skills, with experience working in agile, cross-functional teams.
Exposure to ad tech or telematics is a plus, with a keen interest in programmatic advertising.

Why Join Us?
Be part of a team that is transforming how businesses leverage driving-behavior data for smarter advertising.
Work in a collaborative, innovative, and growth-oriented environment that values learning and technical excellence.
Opportunities to work on advanced cloud-native architectures and cutting-edge technologies like React, Go, and big data tools.
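The reliability practices this role calls for in high-transaction systems often rest on patterns like retry with exponential backoff and jitter. A minimal sketch follows, shown in Python for brevity even though the posting’s stack is .NET/C#; the `call_with_retries` helper and the flaky downstream call are hypothetical, not part of the posting.

```python
import random
import time

def call_with_retries(operation, max_attempts=4, base_delay=0.05):
    """Invoke `operation`, retrying on exception with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Full jitter keeps many clients from retrying in lockstep.
            time.sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))

# Example: a downstream call that fails twice with a transient error,
# then succeeds on the third attempt.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = call_with_retries(flaky)
print(result, attempts["n"])  # ok 3
```

In a production microservice this logic would typically come from the platform’s resilience library rather than be hand-rolled, but the shape of the pattern is the same.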
Primary Skills
Customer Centricity, Digital Literacy, Inclusive Leadership, Learning Agility, Results-Oriented

Recruiter Info
Yateesh B G, ybgaa@allstate.com

About Allstate
The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization’s business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies