1.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job ID: 2025-14313
Date posted: 30/06/2025
Location: Bengaluru, India
Category: IT

Job Overview
We are looking for a forward-thinking Service Reliability Analyst – Network Infrastructure to enhance our enterprise network systems' stability, performance, and observability. This role combines traditional NOC responsibilities with modern AI Ops practices and operates within a 24/7 shift-based team. The ideal candidate will proactively detect and resolve network anomalies, utilise AI/ML insights to optimise operations, and support continuous service availability and performance improvement.

Responsibilities
- Monitor and manage enterprise network infrastructure using AI Ops platforms, including LAN, WAN, VPN, SD-WAN, data centres, and cloud networks.
- Leverage AI Ops tools (e.g. Dynatrace, LogicMonitor) to detect anomalies, correlate events, and reduce alert fatigue.
- Implement automated maintenance and routine upgrades of enterprise-grade network infrastructure.
- Support root cause analysis (RCA) using AI-generated insights and contribute to incident postmortems.
- Maintain dashboards, reports, and documentation for operational visibility and performance KPIs (latency, availability, MTTR, etc.).
- Continuously tune AI/ML models and integrate new data sources to improve detection accuracy and incident correlation.
- Participate in change management reviews to assess risk to network service reliability.
- Support automation initiatives and contribute to developing intelligent incident response playbooks.
- Work on a shift pattern, on a 24/7/365 operating model, while being able to work independently and flexibly in response to emergencies or critical issues.

Required Skills And Experience
- 1 to 3 years of experience.
- Bachelor's degree in Information Technology, Computer Science, Network Engineering, or equivalent experience.
- Hands-on experience with network technologies and protocols (TCP/IP, BGP, OSPF, DNS, DHCP, SD-WAN).
- Practical knowledge of ServiceNow ITSM.
- Experience with telemetry and observability tools such as LogicMonitor.
- Basic analytical skills with a data-driven approach to identifying and resolving network issues.
- Willingness to learn new skills and technologies as the SRC increases its scope of responsibility.
- Effective communicator within a team, with a proactive approach and personal accountability for outcomes.
- Ability to analyze incident patterns and metrics to proactively recommend reliability improvements.
- Certifications such as Cisco CCNA/CCNP, CompTIA Network+, or equivalent; in addition, the Cisco DevNet certification would be highly advantageous.
- Experience with public cloud networking (AWS, Azure, GCP).
- Familiarity with ITIL and SRE principles (SLIs/SLOs, error budgets, incident command).
- Experience integrating AI Ops tools with ITSM systems (e.g. ServiceNow, Jira Service Management).
- Exposure to automation/orchestration tools (Ansible, Terraform).

"Nice To Have" Skills And Experience
- Exposure to high-performance computing or cloud-native services.
- Experience creating or updating Ansible playbooks for repetitive tasks or configuration.
- Curiosity about automation and DevOps practices.

Accommodations at Arm
At Arm, we want to build extraordinary teams. If you need an adjustment or an accommodation during the recruitment process, please email

Hybrid Working at Arm
Arm's approach to hybrid working is designed to create a working environment that supports both high performance and personal wellbeing. We believe in bringing people together face to face to enable us to work at pace, whilst recognizing the value of flexibility. Within that framework, we empower groups/teams to determine their own hybrid working patterns, depending on the work and the team's needs. Details of what this means for each role will be shared upon application.
In some cases, the flexibility we can offer is limited by local legal, regulatory, tax, or other considerations, and where this is the case, we will collaborate with you to find the best solution. Please talk to us to find out more about what this could look like for you.

Equal Opportunities at Arm
Arm is an equal opportunity employer, committed to providing an environment of mutual respect where equal opportunities are available to all applicants and colleagues. We are a diverse organization of dedicated and innovative individuals, and don't discriminate on the basis of race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
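The reliability KPIs named in this listing (MTTR, availability) have simple definitions worth knowing before an interview. The sketch below is one common way to compute them; the incident log is invented for illustration.

```python
from datetime import datetime, timedelta

def mttr_hours(incidents):
    """Mean Time To Repair: average of (resolved - opened) across incidents."""
    durations = [res - opn for opn, res in incidents]
    total = sum(durations, timedelta())
    return total.total_seconds() / 3600 / len(incidents)

def availability_pct(total_hours, downtime_hours):
    """Availability = uptime / total observation window, as a percentage."""
    return 100.0 * (total_hours - downtime_hours) / total_hours

# Hypothetical incident log: (opened, resolved) pairs
incidents = [
    (datetime(2025, 7, 1, 9, 0), datetime(2025, 7, 1, 10, 30)),   # 1.5 h outage
    (datetime(2025, 7, 3, 22, 0), datetime(2025, 7, 4, 0, 30)),   # 2.5 h outage
]

print(round(mttr_hours(incidents), 2))            # mean repair time in hours
print(round(availability_pct(30 * 24, 4.0), 3))   # availability over a 30-day month
```

A dashboard in Grafana or LogicMonitor is ultimately plotting exactly these ratios over a rolling window.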
Posted 2 weeks ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Skor:
Are you ready to transform how Indonesians access credit? At Skor, we're building the future of credit cards and credit scoring, combining technology, design, and data to empower millions with better financial solutions.

Role Overview:
As a Senior Backend Engineer at Skor, you'll play a key role in building robust, scalable backend systems that power our credit products. You'll work closely with cross-functional teams to ship high-quality features, solve complex engineering problems, and contribute to our core platform as we scale across Indonesia. This is an opportunity to grow fast, take ownership, and work alongside a team that values speed, impact, and technical excellence.

Key Responsibilities:
- Design, develop, and maintain scalable APIs and backend systems
- Collaborate with product, design, and other engineers to ship high-impact features
- Contribute to system architecture decisions and backend tech strategy
- Write clean, efficient, and testable code with a strong focus on performance and reliability
- Monitor, debug, and optimize services in a cloud-based environment (e.g., AWS)
- Participate in code reviews and share knowledge with the team
- Work in an Agile environment with short release cycles and continuous delivery

You'll Be a Good Fit If You:
- Have 3–4 years of experience in backend development, preferably at high-growth startups or product companies
- Are skilled in modern backend technologies (e.g., Node.js, Go, Java, or Python) and cloud platforms like AWS
- Understand system design, microservices, and API architecture
- Have experience working with relational databases (PostgreSQL, MySQL) and/or message queues (e.g., Kafka, RabbitMQ)
- Enjoy solving performance challenges (latency, memory, cost optimization)
- Are a strong communicator, proactive team player, and fast learner
- Care about code quality, testing, and maintainability

Ideal Qualifications:
- Bachelor's or Master's in Computer Science from a top-tier institute (IITs, NITs, BITS, or any equivalent top-tier institution).
- Experience in fast-paced, high-growth product companies or start-ups.
- Passionate about making an impact and solving complex problems with empathy for end users.

At Skor, you'll have the opportunity to influence the future of the company and tackle exciting challenges in a dynamic, distributed environment.
Posted 2 weeks ago
0 years
0 Lacs
Delhi, India
On-site
About The Role
As a Data Engineer in the Edge of Technology Center, you will play a critical role in designing and implementing scalable data infrastructure to power advanced analytics, AI/ML, and business intelligence. This position demands a hands-on technologist who can architect reliable pipelines, manage real-time event streams, and ensure smooth data operations across cloud-native environments. You will work closely with cross-functional teams to enable data-driven decision-making and innovation across the organization.

Key Responsibilities
- Design, implement, and maintain robust ETL/ELT pipelines using tools like Argo Workflows or Apache Airflow.
- Manage and execute database schema changes with Alembic or Liquibase, ensuring data consistency.
- Configure and optimize distributed query engines like Trino and AWS Athena for analytics.
- Deploy and manage containerized workloads on AWS EKS or GCP GKE using Docker, Helmfile, and Argo CD.
- Build data lakes/warehouses on AWS S3 and implement performant storage using Apache Iceberg.
- Use Terraform and other IaC tools to automate cloud infrastructure provisioning securely.
- Develop CI/CD pipelines with GitHub Actions to support rapid and reliable deployments.
- Architect and maintain Kafka-based real-time event-driven systems using Apicurio and AVRO.
- Collaborate with product, analytics, and engineering teams to define and deliver data solutions.
- Monitor and troubleshoot data systems for performance and reliability issues using observability tools (e.g., Prometheus, Grafana).
- Document data flows and maintain technical documentation to support scalability and knowledge sharing.

Key Deliverables
- Fully operational ETL/ELT pipelines supporting high-volume, low-latency data processing.
- Zero-downtime schema migrations with consistent performance across environments.
- Distributed query engines tuned for large-scale analytics with minimal response time.
- Reliable containerized deployments in Kubernetes using GitOps methodologies.
- Kafka-based real-time data ingestion pipelines with consistent schema validation.
- Infrastructure deployed and maintained as code using Terraform and version control.
- Automated CI/CD processes ensuring fast, high-quality code releases.
- Cross-functional project delivery aligned with business requirements.
- Well-maintained monitoring dashboards and alerting for proactive issue resolution.
- Internal documentation and runbooks for operational continuity and scalability.

Qualifications
Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field from a recognized institution.

Technical Skills
- Orchestration Tools: Argo Workflows, Apache Airflow
- Database Migration: Alembic, Liquibase
- SQL Engines: Trino, AWS Athena
- Containers & Orchestration: Docker, AWS EKS, GCP GKE
- Data Storage: AWS S3, Apache Iceberg
- Relational Databases: Postgres, MySQL, Aurora
- Infrastructure Automation: Terraform (or equivalent IaC tools)
- CI/CD: GitHub Actions or similar
- GitOps Tools: Argo CD, Helmfile
- Event Streaming: Kafka, Apicurio, AVRO
- Languages: Python, Bash
- Monitoring: Prometheus, Grafana (preferred)

Soft Skills
- Strong analytical and problem-solving capabilities in complex technical environments.
- Excellent written and verbal communication skills to interact with both technical and non-technical stakeholders.
- Self-motivated, detail-oriented, and proactive in identifying improvement opportunities.
- Team player with a collaborative approach and eagerness to mentor junior team members.
- High adaptability to new technologies and dynamic business needs.
- Effective project management and time prioritization.
- Strong documentation skills for maintaining system clarity.
- Ability to translate business problems into data solutions efficiently.

Benefits
- Competitive salary and benefits package in a globally operating company.
- Opportunities for professional growth and involvement in diverse projects.
- Dynamic and collaborative work environment.

Why You'll Love Working With Us
Encardio offers a thriving environment where innovation and collaboration are essential. You'll be part of a diverse team shaping the future of infrastructure globally. Your work will directly contribute to some of the world's most ambitious and ground-breaking engineering projects. Encardio is an equal-opportunity employer committed to diversity and inclusion.

How To Apply
Please submit your CV and cover letter outlining your suitability for the role at humanresources@encardio.com
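The ETL/ELT responsibilities above all share one shape: extract raw rows, validate and transform them, then load the survivors. The sketch below shows that shape in plain Python; in production each function would become a task in an Argo Workflows or Airflow DAG, and the data, field names, and schema rules here are invented for illustration.

```python
# Minimal extract -> transform -> load sketch. Malformed rows are dropped
# during transform, the way a schema registry (e.g. Apicurio + AVRO) would
# reject them upstream in a real Kafka pipeline.

def extract():
    # Stand-in for reading raw event rows from S3 or Kafka.
    return [
        {"user_id": "u1", "amount": "10.50", "currency": "USD"},
        {"user_id": "u2", "amount": "bad",   "currency": "USD"},  # malformed row
        {"user_id": "u3", "amount": "7.25",  "currency": "EUR"},
    ]

def transform(rows):
    # Validate and normalise; rows failing the schema check are skipped
    # (a real pipeline would route them to a dead-letter queue).
    clean = []
    for row in rows:
        try:
            clean.append({**row, "amount": float(row["amount"])})
        except ValueError:
            continue
    return clean

def load(rows, warehouse):
    # Stand-in for an append to an Iceberg table on S3.
    warehouse.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded)  # number of rows that survived validation
```

Orchestrators like Airflow add scheduling, retries, and backfills around these steps, but the task bodies stay this simple.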
Posted 2 weeks ago
8.0 years
0 Lacs
Delhi, India
On-site
About the Role
We are looking for a seasoned Engineering Manager to lead the development of our internal Risk, Fraud and Operations Platform. This platform plays a critical role in ensuring smooth business operations, detecting anomalies, managing fraud workflows, and supporting internal teams with real-time visibility and control. As an Engineering Manager, you'll be responsible for leading a cross-functional team of backend engineers working on high-throughput systems, real-time data pipelines, and internal tools that power operational intelligence and risk management. You will own delivery, architecture decisions, team growth, and collaboration with stakeholders.

Key Responsibilities
- Lead and grow a team of software engineers building internal risk and ops platforms.
- Oversee the design and development of scalable microservices and real-time data pipelines.
- Collaborate with stakeholders from Risk, Ops, and Product to define technical roadmaps and translate them into delivery plans.
- Ensure high system reliability, data accuracy, and low-latency access to risk signals and ops dashboards.
- Drive architectural decisions, code quality, testing, and deployment best practices.
- Contribute to hands-on design, reviews, and occasional coding when required.
- Optimize performance and cost-efficiency of services deployed on AWS.
- Mentor team members and foster a culture of ownership, innovation, and continuous learning.

Tech Stack You'll Work With
- Languages: Node.js, Python, Java
- Data & Messaging: Kafka, OpenSearch, MongoDB, MySQL, Apache Spark, Apache Flink, Apache Druid
- Architecture: Microservices, REST APIs
- Infrastructure: AWS (EC2, ECS/EKS, Lambda, RDS, CI/CD, etc.)

Requirements
- 8+ years of software engineering experience with backend and distributed systems.
- 2+ years of people management or tech leadership experience.
- Strong experience with Node.js and Python; familiarity with Java is a plus.
- Hands-on experience with event-driven architecture using Kafka or similar.
- Exposure to OpenSearch, MongoDB, and relational databases like MySQL.
- Exposure to Spark, Flink, and ETL data pipelines.
- Deep understanding of cloud-native architecture and services on AWS.
- Proven ability to manage timelines, deliver features, and drive cross-functional execution.
- Strong communication and stakeholder management skills.

Preferred Qualifications
- Prior experience in risk, fraud detection, operations tooling, or internal platforms.
- Experience with observability, alerting, and anomaly detection systems.
- Comfortable working in fast-paced environments with rapidly evolving requirements.
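The anomaly-detection systems this listing mentions often start from something as simple as a rolling z-score over a metric stream: flag a point when it deviates sharply from the recent mean. The sketch below shows the idea; the window size, threshold, and transaction-count series are invented for illustration.

```python
import statistics

def zscore_anomalies(values, window=5, threshold=3.0):
    """Return indices whose value deviates from the trailing window's mean
    by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(values[i] - mean) / stdev > threshold:
            flagged.append(i)
    return flagged

# Hypothetical per-minute transaction counts with one fraud-like spike
series = [100, 102, 98, 101, 99, 100, 103, 97, 100, 500, 101]
print(zscore_anomalies(series))  # index of the spike
```

Production systems replace the trailing mean with seasonal baselines and stream the computation through Flink or Spark, but the detection rule is the same ratio.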
Posted 2 weeks ago
0 years
0 Lacs
India
On-site
Mission: Ship fast, reliable AI (LLM + retrieval) features that users pay for. No vanity research.

What You'll Do
- Prototype & ship LLM features (chat, summarization, transform) weekly.
- Design & version prompts, add guardrails, run A/B + regression tests.
- Build lean RAG loops (ingest → chunk → embed → vector store → answer).
- Add an evaluation harness (quality, hallucination, latency, token cost).
- Implement & secure lightweight FastAPI/Node endpoints (auth, rate limits, logging).
- Monitor latency, cost/user, and error rates; add semantic + response caching.
- Redact PII; handle secrets & access keys safely.
- Write short decision notes & maintain a prompt registry.

Job Type: Contractual / Temporary
Contract length: 4 months
Pay: ₹11,796.23 - ₹88,885.64 per month
Work Location: In person
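The "lean RAG loop" this listing describes (ingest → chunk → embed → vector store → answer) can be sketched end to end without any ML libraries. In the sketch below the "embedding" is a toy bag-of-words vector standing in for a real embedding model, the vector store is a plain list, and the corpus is invented; only the retrieval shape is the point.

```python
import math
import re
from collections import Counter

def chunk(text, size=8):
    # Split a document into fixed-size word windows (real pipelines use
    # token counts and overlapping windows).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy stand-in for an embedding model: a bag-of-words count vector.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Ingest: build the "vector store" as (chunk, vector) pairs.
doc = ("Refunds are processed within five business days. "
       "To request a refund open a support ticket. "
       "Shipping is free on orders above fifty dollars.")
store = [(c, embed(c)) for c in chunk(doc)]

def retrieve(query, k=1):
    # The answer step would pass these top-k chunks to the LLM as context.
    ranked = sorted(store, key=lambda cv: cosine(embed(query), cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

print(retrieve("how do I get a refund?"))
```

Swapping `embed` for a real model and `store` for a vector database (plus the final LLM call) turns this into the production loop; the ingest/chunk/retrieve skeleton is unchanged.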
Posted 2 weeks ago
8.0 years
6 - 8 Lacs
Hyderābād
On-site
General Information
Locations: Hyderabad, Telangana, India
Role ID: 209940
Worker Type: Regular Employee
Studio/Department: CT - IT
Work Model: Hybrid

Description & Requirements
Electronic Arts creates next-level entertainment experiences that inspire players and fans around the world. Here, everyone is part of the story. Part of a community that connects across the globe. A place where creativity thrives, new perspectives are invited, and ideas matter. A team where everyone makes play happen.

As a Software Engineer, you will work as a Lead Full-stack UI Developer and develop scalable web applications for millions of players worldwide. You will apply the latest UI and backend technologies to implement modern, sleek applications. You will also work with the Scrum master, Product managers, and partners to deliver products in the area of Player Experience.

What you'll do:
- You'll partner with Product managers and architects to develop scalable and efficient solutions to improve fan care and push fan growth.
- You'll implement high-volume, low-latency UI applications using React, NextJS, Tailwind, or Bootstrap on TypeScript.
- You'll build frontend designs and integrations with backend services using NodeJS.
- You'll work on cloud-native serverless solutions to achieve product capabilities.
- You'll lead the end-to-end deliverables of a product line.
- You'll be responsible for code quality and efficiency, including unit tests.
- You'll collaborate with the best designers, engineers of different technical backgrounds, and architects.
- You'll report to an Engineering Manager.

What we are looking for:
- Bachelor's degree in Computer Science Engineering or equivalent, with overall 8+ years of experience as a Lead Full-stack UI engineer (MERN stack preferable).
- 8+ years of experience working in front-end technologies like NextJS, React, or Angular, along with advanced CSS technologies like Tailwind and Bootstrap, with unit tests to ensure production-ready code with minimal bugs.
- 5+ years of JavaScript programming experience, with knowledge of advanced JavaScript concepts like compilation, webpack, bundling, TypeScript, and SCSS.
- Knowledge of design patterns, scalable architectures, Git, and coding standards.
- 2+ years of experience working on cloud services like AWS.
- Good experience with SQL and NoSQL databases and their query languages.
- Understanding of containerization concepts, CI/CD, and go-to-market for web applications.
- Experience with Agile methodologies to iterate quickly on product changes, develop user stories, and work through backlogs.

About Electronic Arts
We're proud to have an extensive portfolio of games and experiences, locations around the world, and opportunities across EA. We value adaptability, resilience, creativity, and curiosity. From leadership that brings out your potential, to creating space for learning and experimenting, we empower you to do great work and pursue opportunities for growth. We adopt a holistic approach to our benefits programs, emphasizing physical, emotional, financial, career, and community wellness to support a balanced life. Our packages are tailored to meet local needs and may include healthcare coverage, mental well-being support, retirement savings, paid time off, family leaves, complimentary games, and more. We nurture environments where our teams can always bring their best to what they do. Electronic Arts is an equal opportunity employer.
All employment decisions are made without regard to race, color, national origin, ancestry, sex, gender, gender identity or expression, sexual orientation, age, genetic information, religion, disability, medical condition, pregnancy, marital status, family status, veteran status, or any other characteristic protected by law. We will also consider employment qualified applicants with criminal records in accordance with applicable law. EA also makes workplace accommodations for qualified individuals with disabilities as required by applicable law.
Posted 2 weeks ago
0 years
5 - 8 Lacs
Hyderābād
On-site
Ready to shape the future of work?
At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Assistant Vice President – Generative AI – Systems Architect

Role Overview:
We are looking for an experienced Systems Architect with extensive experience in designing and scaling Generative AI systems to production. This role requires an individual with deep expertise in system architecture, software engineering, data platforms, and AI infrastructure, who can bridge the gap between data science, engineering, and business. You will be responsible for the end-to-end architecture of GenAI systems, including model lifecycle management, inference, orchestration, and pipelines.

Key Responsibilities:
- Architect and design end-to-end systems for production-grade Generative AI applications (e.g., LLM-based chatbots, copilots, content generation tools).
- Define and oversee system architecture covering data ingestion, model training/fine-tuning, inferencing, and deployment pipelines.
- Establish architectural tenets like modularity, scalability, reliability, observability, and maintainability.
- Collaborate with data scientists, ML engineers, platform engineers, and product managers to align architecture with business and AI goals.
- Choose and integrate foundation models (open source or proprietary) using APIs, model hubs, or fine-tuned versions.
- Evaluate and design solutions based on architecture patterns such as Retrieval-Augmented Generation (RAG), Agentic AI, Multi-modal AI, and Federated Learning.
- Design secure and compliant architecture for enterprise settings, including data governance, auditability, and access control.
- Lead system design reviews and define non-functional requirements (NFRs), including latency, availability, throughput, and cost.
- Work closely with MLOps teams to define the CI/CD processes for model and system updates.
- Contribute to the creation of reference architectures, design templates, and reusable components.
- Stay abreast of the latest advancements in GenAI, system design patterns, and AI platform tooling.

Qualifications we seek in you!

Minimum Qualifications:
- Proven experience designing and implementing distributed systems, cloud-native architectures, and microservices.
- Deep understanding of Generative AI architectures, including LLMs, diffusion models, prompt engineering, and model fine-tuning.
- Strong experience with at least one cloud platform (AWS, GCP, or Azure) and services like SageMaker, Vertex AI, or Azure ML.
- Experience with Agentic AI systems or orchestrating multiple LLM agents.
- Experience with multimodal systems (e.g., combining image, text, video, and speech models).
- Knowledge of semantic search, vector databases, and retrieval techniques in RAG.
- Familiarity with Zero Trust architecture and advanced enterprise security practices.
- Experience in building developer platforms/toolkits for AI consumption.
- Contributions to open-source AI system frameworks or thought leadership in GenAI architecture.
- Hands-on experience with tools and frameworks like LangChain, Hugging Face, Ray, Kubeflow, MLflow, or Weaviate/FAISS.
- Knowledge of data pipelines, ETL/ELT, and data lakes/warehouses (e.g., Snowflake, BigQuery, Delta Lake).
- Solid grasp of DevOps and MLOps principles, including containerization (Docker), orchestration (Kubernetes), CI/CD pipelines, and model monitoring.
- Familiarity with system design tradeoffs in latency vs. cost vs. scale for GenAI workloads.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience in software/system architecture, with experience in GenAI/AI/ML.
- Strong interpersonal and communication skills; ability to collaborate and present to technical and executive stakeholders.
- Certifications in cloud platforms (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Solutions Architect Expert, Google Cloud Professional Data Engineer).
- Familiarity with data governance and security best practices.

Why join Genpact?
- Be a transformation leader – work at the cutting edge of AI, automation, and digital innovation.
- Make an impact – drive change for global enterprises and solve business challenges that matter.
- Accelerate your career – get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best – join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture – our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up.
Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Job: Assistant Vice President
Primary Location: India-Hyderabad
Schedule: Full-time
Education Level: Master's / Equivalent
Job Posting: Jul 21, 2025, 2:57:18 AM
Unposting Date: Ongoing
Master Skills List: Digital
Job Category: Full Time
Posted 2 weeks ago
1.0 years
2 - 3 Lacs
Hyderābād
On-site
Job Opening for Child Psychologist
Job Location: Gachibowli, Hyderabad, Telangana
Contact: 9311809772 / kyadav@momsbelief.com
Freshers and experienced candidates can both apply.

Job Highlights:

Role & Responsibilities
- Perform parent counselling and grievance redressal from time to time.
- Recognize clients that can benefit from ABA and counsel the parents for enrolment.
- Plan and conduct ABA assessments, either on your own or with the support of a supervisor.
- Take daily cold-probe data for the client.
- Make assessment reports and an IEP for the client, either on your own or with the support of a supervisor.
- Manage negative behaviour of the learner and maintain instructional control. Ensure no person or property is damaged by the learner.
- Report any problem behaviours of special concern to supervisors, and plan and run a behaviour intervention plan.
- Collect data such as frequency, duration, and latency to track the progress of the child.
- Manage materials required for therapy. Coordinate promptly with supervisors and the team if and when new material is needed.
- Coordinate with Special Educators, ST, and OT from time to time so there is no overlap or clash in therapy goals.
- Ensure skills, once achieved by the learner, are generalised and kept in maintenance. Contact supervisors to add new goals to the plan.
- Manage group sessions if they are planned at the centre.

Other Skills
- Must have a good understanding of psychology, especially the reward principle.
- Is well versed and confident in using Google Sheets and Docs, even on the phone, to ensure timely data correction.
- Should be quick in their responses to ensure instructional control in the learners and keep negative behaviours in check.
- Is not ashamed of singing and dancing as part of therapy.
- Is physically fit enough to engage in physical play with learners.
- Is creative enough to make the best use of available resources in therapy.
- Is open and inviting to the learner, yet strict enough to maintain instructional control.
- Is sufficiently loud, especially while praising the learner. Can instil enthusiasm.
- Can address regular queries from parents. Can differentiate which issues described by parents or school need immediate attention.
- Knows how to ignore tantrums when required. Gives little to no reaction when being hit, spat on, or laughed at by the learner.
- Is up to date with their knowledge in the field of ABA and is willing to learn more.

Job Types: Full-time, Permanent, Fresher
Pay: ₹20,000.00 - ₹25,000.00 per month
Benefits:
- Health insurance
- Provident Fund
Application Question(s): How soon can you join this job?
Education: Master's (Preferred)
Experience: Child Psychologist: 1 year (Preferred)
Language: Telugu (Preferred)
Location: Hyderabad, Telangana (Preferred)
Work Location: In person
Posted 2 weeks ago
8.0 years
5 - 10 Lacs
Bengaluru
On-site
We help the world run better
At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What you'll do:
We are looking for a Senior Software Engineer – Java to join and strengthen the App2App Integration team within SAP Business Data Cloud. This role is designed to accelerate the integration of SAP's application ecosystem with its unified data fabric, enabling low-latency, secure, and scalable data exchange. You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive the evolution of SAP's App2App integration capabilities, with hands-on involvement in Java, ETL and distributed data processing, Apache Kafka, DevOps, SAP BTP, and Hyperscaler platforms.

Responsibilities:
- Design and develop App2App integration components and services using Java, RESTful APIs, and messaging frameworks such as Apache Kafka.
- Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows.
- Integrate data engineering workflows with tools such as Databricks, Spark, or other cloud-based processing platforms (experience with Databricks is a strong advantage).
- Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation, and establishing best practices.
- Collaborate with cross-functional teams to enable secure, reliable, and performant communication across SAP applications.
- Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation, and routing.
- Work closely with DevOps to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices.
- Guide cloud-native, secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP).
- Collaborate with SAP's broader Data Platform efforts, including Datasphere, SAP Analytics Cloud, and BDC runtime architecture.

What you bring:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns.
- Hands-on experience building ETL pipelines and working with large-scale data processing frameworks.
- Experience or experimentation with tools such as Databricks, Apache Spark, or other cloud-native data platforms is highly advantageous.
- Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA is highly desirable.
- Experience designing CI/CD pipelines, with containerization (Docker), Kubernetes, and DevOps best practices.
- Working knowledge of Hyperscaler environments such as AWS, Azure, or GCP.
- Passionate about clean code, automated testing, performance tuning, and continuous improvement.
- Strong communication skills and the ability to collaborate with global teams across time zones.

Meet your Team:
SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise.
Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive and high-impact environment, enabling innovation and integration across SAP’s data platforms. #DevT3 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com. For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their race, religion, national origin, ethnicity, age, gender (including pregnancy, childbirth, etc.), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 426958 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
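The App2App integration work this posting describes centres on publishing and consuming business events through a messaging layer such as Apache Kafka. A minimal, self-contained sketch of the pattern (not SAP's actual framework): an in-memory queue stands in for a Kafka topic, and the `order.created` event type and its payload are invented for illustration.

```python
import json
import queue

# In place of a real Apache Kafka broker, a simple in-memory queue stands in
# for one topic; the topic and event names are illustrative, not SAP APIs.
topic = queue.Queue()

def publish(event_type, payload):
    """Producer side: serialize a business event and put it on the topic."""
    topic.put(json.dumps({"type": event_type, "payload": payload}))

def consume_all(handlers):
    """Consumer side: drain the topic and dispatch each event to its handler."""
    processed = []
    while not topic.empty():
        event = json.loads(topic.get())
        handler = handlers.get(event["type"])
        if handler:
            processed.append(handler(event["payload"]))
    return processed

publish("order.created", {"order_id": 42, "amount": 99.5})
publish("order.created", {"order_id": 43, "amount": 10.0})

results = consume_all({"order.created": lambda p: p["order_id"]})
print(results)  # [42, 43]
```

In a real deployment the producer and consumer would be separate services using a Kafka client library, with serialization schemas, partitioning and consumer-group offset management handled by the broker.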
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru
On-site
Teamwork makes the stream work. Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines. About the team Roku is the No. 1 TV streaming platform in the U.S., Canada, and Mexico with 70+ million active accounts. Roku pioneered streaming to the TV and continues to innovate and lead the industry. We believe Roku’s continued success relies on its investment in our machine learning (ML) recommendation engine. Roku enables our users to access millions of pieces of content, including movies, episodes, news, sports, music and channels from all around the world. About the role We’re on a mission to build cutting-edge advertising technology that empowers businesses to run sustainable and highly profitable campaigns. The Ad Performance team owns server technologies, data, and cloud services aimed at improving the ad experience. We're looking for seasoned engineers with a background in machine learning to aid in this mission. Examples of problems include improving ad relevance, inferring demographics, yield optimisation, and many more. Employees in this role are expected to apply knowledge of experimental methodologies, statistics, optimisation, probability theory, and machine learning using both general purpose software and statistical languages. 
What you’ll be doing ML infrastructure: Help build a first-class machine learning platform from the ground up that manages the entire model lifecycle - feature engineering, model training, versioning, deployment, online serving/evaluation, and monitoring prediction quality. Data analysis and feature engineering: Apply your expertise to identify and generate features that can be leveraged by multiple use cases and models. Model training with batch and real-time prediction scenarios: Use machine learning and statistical modelling techniques such as Decision Trees, Logistic Regression, Neural Networks, Bayesian Analysis and others to develop and evaluate algorithms for improving product/system performance, quality, and accuracy. Production operations: Low-level systems debugging, performance measurement, and optimisation on large production clusters. Collaboration with cross-functional teams: Partner with product managers, data scientists, and other engineers to deliver impactful solutions. Staying ahead of the curve: Continuously learn and adapt to emerging technologies and industry trends. We’re excited if you have a Bachelor's, Master's, or PhD in Computer Science, Statistics, or a related field. Experience in applied machine learning on real use cases (bonus points for ad tech-related use cases). Great coding skills and strong software development experience (we use Spark, Python, Java). Familiarity with real-time evaluation of models with low-latency constraints. Familiarity with distributed ML frameworks such as Spark-MLlib, TensorFlow, etc. Ability to work with large-scale computing frameworks, data analysis systems, and modelling environments. Examples include Spark, Hive, NoSQL stores such as Aerospike and ScyllaDB. Ad tech background is a plus. #LI-PS2 Benefits Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. 
Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter. The Roku Culture Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV. We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet. By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
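The modelling techniques this role mentions (e.g. logistic regression for a click/no-click signal) can be illustrated with a toy, pure-Python sketch. The mean-centered "engagement score" feature and click labels are invented; a production system would train with frameworks such as Spark MLlib or TensorFlow on real data.

```python
import math

def train_logistic(xs, ys, lr=1.0, epochs=500):
    """Fit w, b for p(click|x) = sigmoid(w*x + b) by batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x / n   # gradient of the log-loss w.r.t. w
            gb += (p - y) / n       # gradient of the log-loss w.r.t. b
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy data: mean-centered engagement scores and click labels (invented).
xs = [-0.4, -0.3, -0.2, 0.2, 0.3, 0.4]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

predict = lambda x: 1 if w * x + b > 0 else 0
print([predict(x) for x in xs])  # [0, 0, 0, 1, 1, 1]
```

The same gradient-descent loop generalizes to many features; real ad-ranking models simply do this at much larger scale with regularization and held-out evaluation.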
Posted 2 weeks ago
0 years
4 - 5 Lacs
Bengaluru
On-site
Monitor payment systems and transaction flows to ensure uptime and performance Investigate and resolve production issues related to payment processing (e.g., failed transactions, latency, system errors) Collaborate with development and infrastructure teams to troubleshoot and deploy fixes Handle L1/L2 support tickets and escalate critical incidents appropriately Maintain logs, dashboards, and alerts using tools like Splunk, Grafana, or New Relic Perform root cause analysis and document incident resolutions Support batch processing, settlement, and reconciliation operations Participate in on-call rotations and provide 24/7 support coverage when needed Strong knowledge of SQL, XML, and scripting languages (e.g., Python, Shell) Familiarity with payment gateways, POS systems, and banking protocols Experience with incident management tools (e.g., Jira, ServiceNow) Understanding of ITIL processes (Incident, Problem, and Change Management) Basic networking and database troubleshooting skills Ability to work under pressure and manage multiple priorities About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state of the art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. 
All employment is decided on the basis of qualifications, merit, and business need.
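The settlement and reconciliation support mentioned in this role boils down to matching an internal ledger against gateway settlement records and flagging discrepancies. A small illustrative sketch (the transaction IDs and amounts are invented; a real run would pull both sides from databases or settlement files):

```python
def reconcile(ledger, gateway):
    """Compare internal ledger entries with gateway settlement records.

    Both inputs map transaction id -> amount; ids and amounts are made up
    for illustration.
    """
    issues = []
    for txn_id, amount in ledger.items():
        if txn_id not in gateway:
            issues.append((txn_id, "missing_at_gateway"))
        elif gateway[txn_id] != amount:
            issues.append((txn_id, "amount_mismatch"))
    for txn_id in gateway:
        if txn_id not in ledger:
            issues.append((txn_id, "missing_in_ledger"))
    return sorted(issues)

ledger = {"T1": 100.0, "T2": 250.0, "T3": 75.0}
gateway = {"T1": 100.0, "T2": 255.0, "T4": 40.0}
print(reconcile(ledger, gateway))
# [('T2', 'amount_mismatch'), ('T3', 'missing_at_gateway'), ('T4', 'missing_in_ledger')]
```

Each flagged item would typically become a support ticket or an entry in the exception queue for manual investigation.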
Posted 2 weeks ago
3.0 years
5 - 10 Lacs
Bengaluru
On-site
About Aerospike Aerospike is the real-time database for mission-critical use cases and workloads, including machine learning, generative, and agentic AI. Aerospike powers millions of transactions per second with millisecond latency, at a fraction of the total cost of ownership compared to other databases. Global leaders, including Adobe, Airtel, Barclays, Criteo, DBS Bank, Experian, Grab, HDFC Bank, PayPal, Sony Interactive Entertainment, The Trade Desk, and Wayfair, rely on Aerospike for customer 360, fraud detection, real-time bidding, profile stores, recommendation engines, and other use cases. Headquartered in Mountain View, California, Aerospike has a global presence with offices in London, Bangalore, and Tel Aviv. In Bengaluru we follow a hybrid model with a mandate of two days' work from the office. Site Reliability Engineer As a Site Reliability Engineer (SRE) for Aerospike, you will play a crucial role in building and improving the reliability, performance, and scalability of our cloud platform. You will contribute to developing robust infrastructure, implementing monitoring solutions, and ensuring the reliability of our mission-critical cloud infrastructure and services. This role offers excellent opportunities for growth and learning in a fast-paced, innovative environment. 
Key Responsibilities Deploying, monitoring, and optimizing Aerospike's cloud platform infrastructure and services across multiple environments Developing and enhancing automation and infrastructure-as-code solutions to improve operational efficiency Building monitoring, alerting, and observability implementations to help detect and resolve system issues proactively Participating in incident response activities, learning from post-mortems, and driving continuous improvement initiatives Implementing security best practices for cloud infrastructure and access control Collaborating with development teams to ensure reliable service delivery Participating in on-call rotation, responding to critical incidents and minimizing downtime through proactive mitigation strategies. Creating and maintaining documentation, runbooks, and system configurations for team knowledge sharing Working on capacity planning and performance optimization efforts Driving CI/CD pipeline improvements and deployment automation Required Experience 3+ years of experience in Site Reliability Engineering, DevOps, Infrastructure Engineering, or related technical fields Experience with at least one major public cloud provider (AWS, Google Cloud, or Azure) and basic understanding of cloud services Familiarity with infrastructure-as-code tools such as Terraform or CloudFormation Basic experience with CI/CD pipelines and automated deployment practices Understanding of Linux/Unix systems administration and basic networking concepts Experience with scripting languages such as Python, Bash, Go, or similar for automation tasks Exposure to containerization technologies such as Docker and basic Kubernetes concepts Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, CloudWatch, or similar) Strong problem-solving skills and eagerness to learn new technologies Good communication skills and ability to work collaboratively in a team environment Preferred Skills and Qualifications Experience with 
database systems, preferably NoSQL databases Understanding of basic security practices in cloud environments Familiarity with Aerospike or other distributed databases Industry certifications such as AWS Cloud Practitioner, Google Cloud Associate, or Azure Fundamentals Exposure to configuration management tools (Ansible or similar) Experience with version control systems (Git) and collaborative development practices Aerospike is an Equal Opportunity Employer. We are committed to providing an environment free from discrimination on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status, or any other basis covered by appropriate law.
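One concrete slice of the SRE work described in this role is tracking an availability SLO and its error budget. A minimal sketch with illustrative numbers (a 99.9% SLO over a 30-day month):

```python
def error_budget(slo, period_minutes, downtime_minutes):
    """Return (achieved availability, remaining error budget in minutes)."""
    budget = (1.0 - slo) * period_minutes        # total allowed downtime
    achieved = 1.0 - downtime_minutes / period_minutes
    return achieved, budget - downtime_minutes

# 99.9% SLO over 30 days (43,200 minutes) with 12 minutes of downtime;
# the numbers are illustrative, not Aerospike's actual targets.
achieved, remaining = error_budget(0.999, 43_200, 12)
print(round(achieved * 100, 4), round(remaining, 1))  # 99.9722 31.2
```

Teams commonly alert when the budget burn rate suggests the remaining budget will be exhausted before the period ends, rather than on each individual incident.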
Posted 2 weeks ago
0 years
8 - 8 Lacs
Chennai
On-site
The engineer will also be responsible for analyzing the results of the tests and providing recommendations for performance improvements. Additionally, this role may also involve working with development teams to optimize the performance of their code and working with system administrators to ensure that the underlying infrastructure is properly configured for optimal performance. This role will usually require strong Java programming skills and experience with microservices, cloud infrastructure and technologies, and performance testing methodologies. Additionally, the engineer should be able to understand front-end performance metrics and help teams optimize their performance scores. Experience with front-end performance tools such as Lighthouse, WebPageTest, PageSpeed Insights, etc. Collaborating with multiple product teams and helping in performance tuning of applications. Shift left and first-time quality - automate performance testing and integrate it into the existing CI/CD pipelines for a better quality and engineering experience. Performance Testing Tools: performance testing tools such as JMeter, LoadRunner, and Gatling. Knowledge of Web Technologies: it is essential to have knowledge of web technologies, including HTML, CSS, JavaScript, and HTTP. Strong analytical skills are necessary to interpret data and identify patterns, trends, and issues related to webpage load and performance. Communication Skills: effective communication skills to collaborate with developers, testers, and other stakeholders to identify and resolve performance issues. The specific responsibilities of a performance engineer managing a large, distributed application built on microservices, Spring Boot, and Google Cloud may include: Gather performance requirements using templates, logs, and monitoring tools. Work with product teams to understand workload models for each system and gather performance requirements. 
Create performance test plans and scenarios and develop test scripts in JMeter/k6/Gatling to meet the objectives of the performance test plan. Set up performance test and performance regression testing guidelines and standards. Conduct system performance testing to ensure system reliability, capacity, and scalability. Perform performance testing such as load testing, endurance testing, volume testing, scalability testing, spike testing, and stress testing using JMeter/LoadRunner. Perform root cause analysis using performance monitoring/profiling tools and identify potential system and resource bottlenecks. Analyze thread dumps, heap dumps, kernel logs, network stats, APM metrics, and application logs to troubleshoot CPU/memory/resource hot spots, API latency, and application/platform health.
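Much of the analysis work this role describes reduces to summarizing latency samples into percentiles (p50/p95/p99), which every tool from JMeter to APM dashboards reports. A minimal sketch using only the Python standard library, with synthetic samples standing in for a load-test result set:

```python
import statistics

def latency_percentiles(samples_ms):
    """Summarize response-time samples (milliseconds) into p50/p95/p99."""
    qs = statistics.quantiles(samples_ms, n=100, method="inclusive")
    # quantiles() returns the 99 cut points; index i-1 is the i-th percentile.
    return {"p50": qs[49], "p95": qs[94], "p99": qs[98]}

# Synthetic samples (1..100 ms) standing in for a load-test result set.
samples = list(range(1, 101))
result = {k: round(v, 2) for k, v in latency_percentiles(samples).items()}
print(result)  # {'p50': 50.5, 'p95': 95.05, 'p99': 99.01}
```

Percentiles are preferred over averages for latency because a handful of slow outliers can dominate user experience while barely moving the mean.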
Posted 2 weeks ago
6.0 - 9.0 years
6 - 10 Lacs
Chennai
On-site
Company Description Organizations everywhere struggle under the crushing costs and complexities of “solutions” that promise to simplify their lives. To create a better experience for their customers and employees. To help them grow. Software is a choice that can make or break a business. Create better or worse experiences. Propel or throttle growth. Business software has become a blocker instead of a way to get work done. There’s another option: Freshworks, with a fresh vision for how the world works. At Freshworks, we build uncomplicated service software that delivers exceptional customer and employee experiences. Our enterprise-grade solutions are powerful, yet easy to use, and quick to deliver results. Our people-first approach to AI eliminates friction, making employees more effective and organizations more productive. Over 72,000 companies, including Bridgestone, New Balance, Nucor, S&P Global, and Sony Music, trust Freshworks customer experience (CX) and employee experience (EX) software to fuel customer loyalty and service efficiency. And, over 4,500 Freshworks employees make this possible, all around the world. Fresh vision. Real impact. Come build it with us. Job Description Overview of the role: As a Lead Software Engineer, you will focus on building next-generation platform services for Freshworks with your strong background in distributed systems and mentor your team to achieve this. You will have an opportunity to redefine customer experiences by building systems that are millisecond-efficient, always available and working at internet scale. If you are the kind of engineer who is passionate about building systems, has a good eye for analysis and a mind that can think outside the box, we want to talk to you. Responsibilities: Lead teams to deliver scalable, low latency, and cost-efficient solutions to different product teams. Drive solutions and implementation leveraging different open source distributed systems and deliver a complete product. 
Build innovative solutions from scratch and liaise with architects and engineers from other product teams to build solutions and drive adoption. Elicit quality attributes of the system as well as create criteria metrics for the product to establish the success of achieved metrics. Implement and support compliance of self and team to Freshworks compliance and information security processes. Requirements: 6 – 9 years of relevant professional experience Advanced proficiency in object-oriented programming principles In-depth understanding of the Software Development Lifecycle (SDLC) Demonstrated ability to design scalable and high-performance systems Skilled in conducting peer code reviews Strong analytical and problem-solving abilities Extensive hands-on programming experience Expertise in data structures and algorithms Solid foundation in system design concepts Qualifications Experience / Desired Skills (but not all required): Degree in Computer Science or equivalent practical experience Experience with large-scale systems Intermediate knowledge of Ruby on Rails Prior experience with AWS Experience with open-source projects Experience troubleshooting in a SaaS environment with an assertive deployment schedule Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Posted 2 weeks ago
0.0 - 1.0 years
1 - 1 Lacs
Coimbatore
On-site
We’re looking for self-driven learners, not clock-watchers. If you believe in growth through ownership, are willing to be challenged, and care about your team as much as your task — we’ll give you the space to do your best work. Job Title: Software Developer Specialization: MATLAB, PYTHON Education: B.E., B.Tech., M.E., M.Tech. Experience: 0-1 year of experience as a MATLAB/Python Developer or Programmer. Location: Gandhipuram, Coimbatore, TN, INDIA NOTE: CANDIDATES MUST BE READY TO ATTEND DIRECT OFFLINE INTERVIEW IMMEDIATELY. STRICTLY NO ONLINE INTERVIEW. NO TIME WASTERS. Requirements: B.E., B.Tech., M.E., M.Tech. graduate with 0-1 year of working knowledge in MATLAB or Python development. Freshers with adequate knowledge can also apply. Salary negotiable for experienced candidates. Should be familiar with different frameworks, notebooks and library functions of Python, MATLAB and Simulink. Java will be an added advantage. Real-time course certifications must be added, if available. Strong communication skills and technical knowledge as a Data Science Engineer, Machine Learning Engineer, NLP or similar role. Knowledge of Image Processing, Data Mining, Big Data, Deep Learning, Machine Learning, Artificial Intelligence, Network Technologies, Signal Processing, Communications, Power Electronics, etc., will be preferred. Should possess excellent problem-solving capability and effective time management, and be a multitasking self-starter and self-learner, ready to pick up new concepts. The first 3 months will be a trainee period, followed by a two-year service agreement with a two-month notice period. Responsibilities: Writing reusable, testable, and efficient MATLAB, Java and Python code for academic projects based on IEEE research papers. Should design and implement low-latency and high-availability applications using both MATLAB and Python. Involved in R&D teams supporting Academic Projects Development and Documentation (Ph.D., MPhil, Engineering, UG/PG Projects). 
To work effectively in creating innovative and novel ideas for the projects in association with R&D team. Job Type: Full-time Pay: ₹10,000.00 - ₹15,000.00 per month Location Type: In-person Schedule: Day shift Fixed shift Ability to commute/relocate: Coimbatore - 641012, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you agreeing to the 2 years service agreement with the company? What is your expert Language? PYTHON OR JAVA OR MATLAB Education: Bachelor's (Required) Language: Tamil (Required) Work Location: In person
Posted 2 weeks ago
4.0 years
0 Lacs
Ahmedabad
Remote
At SmartBear, we believe building great software starts with quality—and we're helping our customers make that happen every day. Our solution hubs—SmartBear API Hub, SmartBear Insight Hub, and SmartBear Test Hub, featuring HaloAI, bring visibility and automation to software development, making it easier for teams to deliver high-quality software faster. SmartBear is trusted by over 16 million developers, testers, and software engineers at 32,000+ organizations – including innovators like Adobe, JetBlue, FedEx, and Microsoft. Software Engineer – JAVA Zephyr Enterprise Solve challenging business problems and build highly scalable applications Design, document and implement new systems in Java 8/17 Build microservices, specifically with HTTP, REST, JSON and XML Product intro: Zephyr Enterprise is undergoing a transformation to better align our products to the end users' requirements while maintaining our market-leading position and strong brand reputation across the Test Management Vertical. Go to our product page if you want to know more about Zephyr Test Management Products | SmartBear. You can even take a free trial to check it out. About the role: As a Software Engineer, you will be an integral part of this transformation, solving challenging business problems and building highly scalable and available applications that provide an excellent user experience. Reporting to the Lead Engineer, you will be required to develop solutions using available tools and technologies, assist the engineering team in problem resolution through hands-on participation, and effectively communicate status, issues, and risks in a precise and timely manner. You will write code per product requirements and create new products, create automated tests, contribute to system testing, and follow an agile mode of development. 
You will interact with both business and technical stakeholders to deliver high-quality products and services that meet business requirements and expectations while applying the latest available tools and technology. Develop scalable real-time low-latency data egress/ingress solutions in an agile delivery method, create automated tests, contribute to system testing, and follow an agile mode of development. We are looking for someone who can design, document, and implement new systems, as well as enhancements and modifications to existing software, with code that complies with design specifications and meets security and Java best practices. Have 4-7 years of experience, with hands-on experience working on the Java 17 platform or higher, and hold a Bachelor's Degree in Computer Science, Computer Engineering or a related technical field. API-driven development - experience working with remote data via SOAP, REST and JSON, and in delivering high-value projects using Agile (SCRUM) methodology, preferably with JIRA. Good understanding of OOAD, the Spring Framework and microservices-based architecture. Experience with application performance tuning, scaling, security and resiliency best practices. Experience with relational or NoSQL databases, level design and core Java patterns. Experience with the AWS stack, RDS, S3, ElastiCache, SSDLC, Agile methodologies and development experience in a SCRUM environment. Experience with message queues, preferably RabbitMQ or ActiveMQ/Artemis. Experience with the Atlassian suite of products and the related ecosystem of plugins. Experience with React and JavaScript is good to have. Why you should join the SmartBear crew: You can grow your career at every level. We invest in your success as well as the spaces where our teams come together to work, collaborate, and have fun. We love celebrating our SmartBears; we even encourage our crew to take their birthdays off. We are guided by a People and Culture organization - an important distinction for us. 
We think about our team holistically – the whole person. We celebrate our differences in experiences, viewpoints, and identities because we know it leads to better outcomes. Did you know: Our main goal at SmartBear is to make our technology-driven world a better place. SmartBear is committed to ethical corporate practices and social responsibility, promoting good in all the communities we serve. SmartBear is headquartered in Somerville, MA with offices across the world including Galway Ireland, Bath, UK, Wroclaw, Poland and Bangalore, India. We've won major industry (product and company) awards including B2B Innovators Award, Content Marketing Association, IntellyX Digital Innovator and Built-in Best Places to Work. SmartBear is an equal employment opportunity employer and encourages success based on our individual merits and abilities without regard to race, color, religion, gender, national origin, ancestry, mental or physical disability, marital status, military or veteran status, citizenship status, age, sexual orientation, gender identity or expression, genetic information, medical condition, sex, sex stereotyping, pregnancy (which includes pregnancy, childbirth, and medical conditions related to pregnancy, childbirth, or breastfeeding), or any other legally protected status.
Posted 2 weeks ago
25.0 years
5 - 9 Lacs
Noida
On-site
NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It’s a unique legacy of innovation that’s fueled by great technology—and amazing people. Today, we’re tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPUs act as the brains of computers, robots, and self-driving cars that can understand the world. Doing what’s never been done before takes vision, innovation, and the world’s best talent. As an NVIDIAN, you’ll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

NVIDIA is looking for a passionate Sr. Site Reliability Engineer to join our DGX Cloud Engineering Team. In this role, you will play a significant part in helping to craft and guide the future of AI and GPUs in the cloud. NVIDIA DGX Cloud is a cloud platform tailored for AI workloads, enabling organizations to move AI projects from development to deployment. Are you passionate about cloud software development, and do you strive for quality? Do you pride yourself on building cloud-scale software systems? If so, join our team at NVIDIA, where we are dedicated to delivering GPU-powered services around the world!

What you'll be doing:
- Help ensure the success of the Omniverse on DGX Cloud platform by building our deployment infrastructure and processes, creating world-class SRE measurement and automation tools to improve operational efficiency, and maintaining a high standard for service operability and reliability.
- Design, build, and implement scalable cloud-based systems for PaaS/IaaS.
- Work closely with other teams on new products and on features/improvements to existing products.
- Develop, maintain, and improve cloud deployment of our software.
- Participate in the triage and resolution of complex infrastructure-related issues.
- Collaborate with developer, QA, and product teams to establish, refine, and streamline our software release process and software observability, ensuring service operability, reliability, and availability.
- Maintain services once live by measuring and monitoring availability, latency, and overall system health using metrics, logs, and traces.
- Develop, maintain, and improve automation tools that improve the efficiency of SRE operations.
- Practice balanced incident response and blameless postmortems.
- Be part of an on-call rotation to support production systems.

What we need to see:
- BS or MS in Computer Science or an equivalent program from an accredited university/college.
- 8+ years of hands-on software engineering or equivalent experience.
- Demonstrated understanding of cloud design in the areas of virtualization and global infrastructure, distributed systems, and security.
- Expertise in Kubernetes (K8s) and KubeVirt, and in building RESTful web services.
- Understanding of building agentic AI solutions, preferably with NVIDIA open-source AI solutions.
- Demonstrated working experience with SRE principles such as metrics emission for observability, monitoring, and alerting using logs, traces, and metrics.
- Hands-on experience with Docker, containers, and Infrastructure as Code (e.g., Terraform) in CI/CD deployments.
- Knowledge of working with CSPs, for example AWS (Fargate, EC2, IAM, ECR, EKS, Route53, etc.), Azure, etc.

Ways to stand out from the crowd:
- Expertise in technologies such as StackStorm, OpenStack, Red Hat OpenShift, and AI databases like Milvus.
- A track record of solving complex problems with elegant solutions.
- Prior experience with Go, Python, and React.
- Demonstrated delivery of complex projects in previous roles.
- Ability to develop frontend applications with concepts such as SSA and RBAC.

We are an equal opportunity employer and value diversity at our company.
We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
The engineer will be responsible for analyzing test results and providing recommendations for performance improvements. This role may also involve working with development teams to optimize the performance of their code, and with system administrators to ensure that the underlying infrastructure is properly configured for optimal performance. The role requires strong Java programming skills and experience with microservices, cloud infrastructure and technologies, and performance testing methodologies. Additionally, the engineer should understand front-end performance metrics and help teams optimize their performance scores.

Responsibilities
The specific responsibilities of a performance engineer managing a large, distributed application built on microservices, Spring Boot, and Google Cloud may include:
- Gather performance requirements using templates, logs, and monitoring tools.
- Work with product teams to understand workload models for each system and gather performance requirements.
- Create performance test plans and scenarios, and develop test scripts in JMeter/K6/Gatling to meet the objectives of the performance test plan.
- Set up performance testing and performance regression testing guidelines and standards.
- Conduct system performance testing to ensure system reliability, capacity, and scalability.
- Perform performance testing such as load, endurance, volume, scalability, spike, and stress testing using JMeter/LoadRunner.
- Perform root cause analysis using performance monitoring/profiling tools, identifying potential system and resource bottlenecks.
- Analyze thread dumps, heap dumps, kernel logs, network stats, APM metrics, and application logs to troubleshoot CPU/memory/resource hot spots, API latency, and application/platform health.
- Use front-end application performance tools such as Lighthouse, WebPageTest, and PageSpeed Insights.
- Collaborate with multiple product teams and help with performance tuning of applications.
- Shift left and build in first-time quality: automate performance testing and integrate it into the existing CI/CD pipelines for a better quality and engineering experience.

Qualifications
- Performance Testing Tools: Experience with performance testing tools such as JMeter, LoadRunner, and Gatling.
- Knowledge of Web Technologies: Knowledge of web technologies, including HTML, CSS, JavaScript, and HTTP, is essential. Strong analytical skills are necessary to interpret data and identify patterns, trends, and issues related to webpage load and performance.
- Communication Skills: Effective communication skills to collaborate with developers, testers, and other stakeholders to identify and resolve performance issues.
Posted 2 weeks ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
- Comfort with Python project management best practices (use of setup.py, logging, pytest, relative module imports, Sphinx docs, etc.)
- Familiarity with GitHub (clone, fetch, pull/push, raising issues and PRs, etc.)
- High familiarity with the use of DL theory/practices in NLP applications
- Comfort coding with Hugging Face, LangChain, Chainlit, TensorFlow and/or PyTorch, scikit-learn, NumPy, and pandas
- Comfort using two or more open-source NLP modules such as spaCy, TorchText, fastai.text, farm-haystack, and others
- Knowledge of fundamental text data processing (use of regex, token/word analysis, spelling correction/noise reduction in text, segmenting noisy unfamiliar sentences/phrases at the right places, deriving insights from clustering, etc.)
- Real-world implementation of BERT or other fine-tuned transformer models (sequence classification, NER, or QA), from data preparation and model creation through inference and deployment
- Use of GCP services such as BigQuery, Cloud Functions, Cloud Run, Cloud Build, and Vertex AI; good working knowledge of other open-source packages to benchmark and derive summaries
- Experience using GPUs/CPUs in cloud and on-prem infrastructures
- Skills to leverage cloud platforms for Data Engineering, Big Data, and ML needs
- Use of Docker (experience with experimental Docker features, docker-compose, etc.)
- Familiarity with orchestration tools such as Airflow and Kubeflow
- Experience with CI/CD and infrastructure-as-code tools such as Terraform
- Kubernetes or any other containerization tool, with experience in Helm, Argo Workflows, etc.
- Ability to develop APIs with compliant, ethical, secure, and safe AI tooling
- Good UI skills to visualize and build better applications using Gradio, Dash, Streamlit, React, Django, etc.
- A deeper understanding of JavaScript, CSS, Angular, HTML, etc. is a plus

Responsibilities
- Design NLP/LLM/GenAI applications/products following robust coding practices
- Explore SoTA models/techniques that can be applied to automotive industry use cases
- Conduct ML experiments to train/infer models; if needed, build models that abide by memory and latency restrictions
- Deploy REST APIs or a minimalistic UI for NLP applications using Docker and Kubernetes tools
- Showcase NLP/LLM/GenAI applications to users in the best way possible through web frameworks (Dash, Plotly, Streamlit, etc.)
- Converge multiple bots into super apps using LLMs with multimodality
- Develop agentic workflows using AutoGen, Agent Builder, and LangGraph
- Build modular AI/ML products that can be consumed at scale
- Data Engineering: skills to perform distributed computing (specifically parallelism and scalability in data processing, modeling, and inferencing through Spark, Dask, RAPIDS AI, or RAPIDS cuDF)
- Ability to build Python-based APIs (e.g., use of FastAPI/Flask/Django for APIs)
- Experience with Elasticsearch and Apache Solr is a plus, as are vector databases

Qualifications
Education: Bachelor’s or Master’s Degree in Computer Science, Engineering, Maths, or Science. Completion of modern NLP/LLM courses or participation in open competitions is also welcomed.
Posted 2 weeks ago
5.0 years
10 - 15 Lacs
Bengaluru, Karnataka, India
On-site
This role is for one of Weekday's clients
Salary range: Rs 1000000 - Rs 1500000 (i.e., INR 10-15 LPA)
Min Experience: 5 years
Location: Bengaluru
JobType: full-time

Requirements
We are seeking a highly skilled and experienced Computer Vision Engineer to join our growing AI team. This role is ideal for someone with strong expertise in deep learning and a solid background in real-time video analytics, model deployment, and computer vision applications. You'll be responsible for developing scalable computer vision pipelines and deploying them across cloud and edge environments, helping build intelligent visual systems that solve real-world problems.

Key Responsibilities:
- Model Development & Training: Design, train, and optimize deep learning models for object detection, segmentation, and tracking using frameworks like YOLO, UNet, Mask R-CNN, and Deep SORT.
- Computer Vision Applications: Build robust pipelines for computer vision applications including image classification, real-time object tracking, and video analytics using OpenCV, NumPy, and TensorFlow/PyTorch.
- Deployment & Optimization: Deploy trained models on Linux-based GPU systems and edge devices (e.g., Jetson Nano, Google Coral), ensuring low-latency performance and efficient hardware utilization.
- Real-Time Inference: Implement and optimize real-time inference systems, ensuring minimal delay in video processing pipelines.
- Model Management: Utilize tools like Docker, Git, and MLflow (or similar) for version control, environment management, and model lifecycle tracking.
- Collaboration & Documentation: Work cross-functionally with hardware, backend, and software teams. Document designs, architectures, and research findings to ensure reproducibility and scalability.

Technical Expertise Required:
- Languages & Libraries: Advanced proficiency in Python and solid experience with OpenCV, NumPy, and other image processing libraries.
- Deep Learning Frameworks: Hands-on experience with TensorFlow, PyTorch, and integration with model training pipelines.
- Computer Vision Models: Object detection (YOLO, all versions); segmentation (UNet, Mask R-CNN); tracking (Deep SORT or similar).
- Deployment Skills: Real-time video analytics implementation and optimization; experience with Docker for containerization; version control using Git; model tracking using MLflow or comparable tools.
- Platform Experience: Proven experience in deploying models on Linux-based GPU environments and edge devices (e.g., NVIDIA Jetson family, Coral TPU).

Professional & Educational Requirements:
- Education: B.E./B.Tech/M.Tech in Computer Science, Electrical Engineering, or a related discipline.
- Experience: Minimum 5 years of industry experience in AI/ML with a strong focus on computer vision and system-level design, and a proven portfolio of production-level projects in image/video processing or real-time systems.

Preferred Qualities:
- Strong problem-solving and debugging skills
- Excellent communication and teamwork capabilities
- A passion for building smart, scalable vision systems
- A proactive and independent approach to research and implementation
Posted 2 weeks ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.
- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
Posted 2 weeks ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Candidates who are ready to come for a face-to-face interview are open to apply. 5 days working from office.

RESPONSIBILITIES
* Architect and implement REST APIs.
* Design and implement low-latency, high-availability, and performant applications.
* Lead integration of GenAI-powered capabilities into backend systems, including prompt-based APIs and tool-using agents.
* Test software to ensure responsiveness, correctness, and efficiency.
* Collaborate with front-end developers on the integration of elements.
* Take initiative to build better and faster solutions to problems of scale.
* Troubleshoot application-related issues and work with the infrastructure team to triage major incidents.
* Analyze/research solutions and develop/implement recommendations accordingly.

REQUIREMENTS
* Bachelor’s Degree in Computer Science or a related field.
* 4+ years of professional experience.
* Expertise in JavaScript (ES6) and Node.js is a must.
* Strong knowledge of algorithms and data structures.
* In-depth knowledge of frameworks like Express.js and Restify.
* Experience building backend services for handling data at large scale is a must.
* Ability to architect high-availability applications and servers on the cloud, adhering to best practices; microservices architecture is preferable.
* Experience working with MySQL and NoSQL databases like DynamoDB and MongoDB.
* Experience with LangChain, LangGraph, CrewAI, or equivalent GenAI agentic frameworks is a strong plus.
* Experience writing complex SQL queries.
* In-depth knowledge of Node.js concepts like the event loop.
* Ability to perform technical deep dives into code.
* Experience building and deploying GenAI-powered backend services or tools (e.g., prompt routers, embedding search, RAG pipelines).
* Understanding of communication using WebSockets.
* Good understanding of clean architecture and SOLID principles.
* Basic knowledge of AWS and Docker containerization is preferable.
* Experience in Python is a plus.
* Experience with AI-assisted development tools (e.g., Cursor, GitHub Copilot) to accelerate development, assist in code reviews, and support efficient coding practices is a plus.
* Familiarity with Git.
Posted 2 weeks ago
12.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Required Skills & Experience:
- 7-12 years of hands-on recruitment experience, with at least 3+ years in BFSI/fintech/stock broking.
- Proven ability to close niche and critical roles (e.g., quant, algo devs, traders, compliance, sales leaders).
- Lead the entire recruitment lifecycle from requisition to onboarding for tech, trading, sales, research, and support roles.
- Strategise and execute hiring plans aligned with business objectives and headcount forecasts.
- Drive niche hiring for quant roles, algo trading, low-latency development, and broking operations.
- Manage and mentor a team of recruiters and recruitment partners.
- Implement and optimise ATS, dashboards, and recruitment automation tools.
- Build and nurture a pipeline of high-potential candidates through proactive sourcing, referrals, and headhunting.
- Manage campus hiring from premier institutions (IITs, ISI, NITs, BITS, IIMs, etc.).
- Own recruitment KPIs such as TAT, cost-per-hire, quality-of-hire, and offer-to-join ratio.
- Ensure compliance with hiring processes, internal audits, and SEBI/regulatory guidelines.

If you're interested or know someone suitable, please email your resume to: 📧 nayan.ray@impetusconsultants.com
Posted 2 weeks ago
12.0 years
0 Lacs
Thane, Maharashtra, India
On-site
We are looking for a Director of Engineering (AI Systems & Secure Platforms) to join our Core Engineering team at Thane (Maharashtra, India). The ideal candidate should have 12-15+ years of experience in architecting and deploying AI systems at scale, with deep expertise in agentic AI workflows, LLMs, RAG, computer vision, and secure mobile/wearable platforms. Join us to craft the next generation of smart eyewear by leading intelligent, autonomous, real-time workflows that operate seamlessly at the edge. The smartphone era is peaking. The next computing revolution is here.

Top 3 Daily Tasks:
- Architect, optimize, and deploy LLMs, RAG pipelines, and computer vision models for smart glasses and other edge devices
- Design and orchestrate agentic AI workflows, enabling autonomous agents with planning, tool usage, error handling, and closed feedback loops
- Collaborate across AI, Firmware, Security, Mobile, Product, and Design teams to embed "invisible intelligence" within secure wearable systems

Minimum Work Experience Required: 12-15+ years of experience in Applied AI, Deep Learning, Edge AI deployment, Secure Mobile Systems, and Agentic AI Architecture.

Top 5 Skills You Should Possess:
- Expertise in TensorFlow, PyTorch, Hugging Face, ONNX, and optimization tools like TensorRT and TFLite
- Strong hands-on experience with LLMs, Retrieval-Augmented Generation (RAG), and vector databases (FAISS, Milvus)
- Deep understanding of Android/iOS integration, AOSP customization, and secure communication (WebRTC, SIP, RTP)
- Experience in privacy-preserving AI (federated learning, differential privacy) and secure AI APIs
- Proven track record in architecting and deploying agentic AI systems: multi-agent workflows, adaptive planning, tool chaining, and MCP (Model Context Protocol)

Cross-Functional Collaboration Excellence:
- Partner with Platform & Security teams to define secure MCP server blueprints exposing device tools, sensors, and services with strong governance and traceability
- Coordinate with Mobile and AI teams to integrate agentic workflows across Android, iOS, and AOSP environments
- Work with Firmware and Product teams to define real-time sensor-agent interactions, secure data flows, and adaptive behavior in smart wearables

What You'll Be Creating:
- Agentic, MCP-enabled pipelines for smart glasses, featuring intelligent agents for vision, context, planning, and secure execution
- Privacy-first AI systems combining edge compute, federated learning, and cloud integration
- A scalable, secure wearable AI platform that reflects our commitment to building purposeful and conscious technology

Preferred Skills:
- Familiarity with secure real-time protocols: WebRTC, SIP, RTP
- Programming proficiency in C, C++, Java, Python, Swift, Kotlin, Objective-C, Node.js, shell scripting, and CUDA (preferred)
- Experience designing AI platforms for wearables/XR with real-time and low-latency constraints
- Deep knowledge of MCP deployment patterns: secure token handling, audit trails, permission governance
- Proven leadership in managing cross-functional tech teams across AI, Firmware, Product, Mobile, and Security
Posted 2 weeks ago
3.0 years
8 - 12 Lacs
Gandhinagar, Gujarat, India
On-site
Job Title: Python/Django Developer
Location: Gandhinagar GIFT City
Job Type: Full-Time (Hybrid)
Experience: 3+ Years

Job Summary
We are seeking a skilled Python/Django Developer to join our team. The ideal candidate will be responsible for managing the interchange of data between the server and the users. Your primary focus will be the development of all server-side logic, ensuring high performance and responsiveness to front-end requests. You will also work closely with front-end developers to integrate user-facing elements into the application. A basic understanding of front-end technologies is required.

Key Responsibilities
- Develop and maintain efficient, reusable, and reliable Python code.
- Design and implement low-latency, high-availability, and performant applications.
- Integrate user-facing elements developed by front-end developers with server-side logic.
- Ensure security and data protection standards are implemented.
- Integrate data storage solutions such as MySQL and MongoDB.
- Optimize applications for maximum speed and scalability.
- Collaborate with other team members and stakeholders to develop scalable solutions.
- Write unit and integration tests to ensure software quality.
- Debug and resolve application issues promptly.
- Maintain code integrity and organization using version control tools like Git.

Key Requirements
- Proficiency in Python with hands-on experience in at least one web framework such as Django or Flask.
- Strong knowledge of Object Relational Mapper (ORM) libraries.
- Experience integrating multiple data sources and databases into one system.
- Understanding of Python’s threading limitations and multi-process architecture.
- Good understanding of server-side templating languages such as Jinja2 or Mako.
- Basic knowledge of front-end technologies like JavaScript, HTML5, and CSS3.
- Strong grasp of security and data protection best practices.
- Experience with user authentication and authorization across multiple systems, servers, and environments.
- Solid understanding of fundamental design principles for scalable applications.
- Experience with event-driven programming in Python.
- Ability to design and implement MySQL database schemas that support business processes.
- Strong unit testing and debugging skills.
- Proficiency in Git for code versioning and collaboration.

Preferred Qualifications
- Experience with cloud platforms like AWS, Azure, or Google Cloud.
- Familiarity with containerization tools like Docker.
- Knowledge of RESTful APIs and microservices architecture.
- Experience working in Agile development environments.

Skills: Azure, AWS Lambda, backend APIs, Mako, backend development, CSS3, Docker, Django, Google Cloud, AWS, JavaScript, Git, RESTful architecture, Jinja2, Python, MongoDB, DevOps, microservices, HTML5, Flask, RESTful APIs, Python scripting, MySQL
Posted 2 weeks ago