1.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: RPA Developer (Entry Level) Experience: 1-4 Years Location: Pune Skills: UiPath, Automation Anywhere (Basics) Employment Type: Full-Time Job Description We are seeking a motivated and detail-oriented Entry-Level RPA Developer with 1-4 years of experience to join our automation team in Pune. The ideal candidate should have hands-on experience in UiPath and a basic understanding of Automation Anywhere platforms. You will be responsible for developing, testing, and maintaining RPA workflows and solutions to automate business processes across various departments. This is an excellent opportunity to grow your career in Robotic Process Automation while working on impactful projects in a collaborative environment. Roles And Responsibilities Analyze business processes to identify automation opportunities. Design, develop, test, and deploy RPA bots using UiPath. Support the configuration of bots with Automation Anywhere for basic tasks as required. Collaborate with business analysts and process owners to understand process workflows. Create and maintain technical documentation for RPA processes. Perform code reviews, testing, and debugging of RPA solutions. Monitor bots in production and handle incidents or enhancements. Maintain RPA platform best practices and follow SDLC procedures. Assist in evaluating and implementing new automation tools and solutions. Provide support for deployed bots, troubleshoot and resolve issues promptly. Key Skills Required 1-4 years of hands-on experience in RPA development using UiPath. Basic working knowledge of Automation Anywhere. Strong analytical and problem-solving skills. Familiarity with workflow design, exception handling, and logging in RPA. Knowledge of scripting (VBScript, Python, or JavaScript) is a plus. Understanding of APIs, SQL, and web services is an advantage. Good communication and documentation skills. Ability to work independently as well as in a team environment. Preferred Qualifications UiPath RPA Developer Foundation or Advanced Certification. Bachelor's degree in Computer Science, Engineering, or a related field. Exposure to Agile methodology or DevOps tools is a plus. (ref:hirist.tech)
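Illustrative only, not part of the listing above: the posting highlights exception handling and logging in RPA workflows, with Python scripting as a plus. A minimal sketch of the retry-and-log pattern such a workflow typically wraps around one automation step is shown below; the function name, log file, and invoice example are hypothetical.

```python
import logging
import time

logging.basicConfig(
    filename="bot_run.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def process_invoice(invoice_id: str) -> None:
    """Placeholder for a single automation step (e.g., a script invoked by a bot)."""
    if not invoice_id:
        raise ValueError("missing invoice id")
    logging.info("Processed invoice %s", invoice_id)

def run_with_retry(invoice_id: str, attempts: int = 3, delay: float = 2.0) -> bool:
    """Retry a step with logging, mirroring a retry-scope / exception-handling pattern."""
    for attempt in range(1, attempts + 1):
        try:
            process_invoice(invoice_id)
            return True
        except Exception as exc:  # real bots usually split business vs. system exceptions
            logging.warning("Attempt %d failed for %s: %s", attempt, invoice_id, exc)
            time.sleep(delay)
    logging.error("Giving up on invoice %s after %d attempts", invoice_id, attempts)
    return False

if __name__ == "__main__":
    run_with_retry("INV-1001")
```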
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Haryana
On-site
You lead the way. We've got your back. With the right backing, people and businesses have the power to progress in incredible ways. When you join Team Amex, you become part of a global and diverse community of colleagues with an unwavering commitment to back our customers, communities, and each other. Here, you'll learn and grow as we help you create a career journey that's unique and meaningful to you with benefits, programs, and flexibility that support you personally and professionally. At American Express, you'll be recognized for your contributions, leadership, and impact. Every colleague has the opportunity to share in the company's success. Together, we'll win as a team, striving to uphold our company values and powerful backing promise to provide the world's best customer experience every day. And we'll do it with the utmost integrity, in an environment where everyone is seen, heard, and feels like they belong. Join Team Amex and let's lead the way together. About Enterprise Architecture: Enterprise Architecture is an organization within the Chief Technology Office at American Express and is a key enabler of the company's technology strategy. The four pillars of Enterprise Architecture include: - Architecture as Code: This pillar owns and operates foundational technologies leveraged by engineering teams across the enterprise. - Architecture as Design: This pillar includes the solution and technical design for transformation programs and business critical projects requiring architectural guidance and support. - Governance: Responsible for defining technical standards and developing innovative tools that automate controls to ensure compliance. - Colleague Enablement: Focused on colleague development, recognition, training, and enterprise outreach. What you will be working on: We are looking for a Senior Engineer to join our Enterprise Architecture team. In this role, you will be designing and implementing highly scalable real-time systems following best practices and using cutting-edge technology. This role is best suited for experienced engineers with a broad skillset who are open, curious, and willing to learn. Qualifications: What you will Bring: - Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience. - 10+ years of progressive experience demonstrating strong architecture, programming, and engineering skills. - Firm grasp of data structures, algorithms with fluency in programming languages like Java, Kotlin, Go. - Ability to lead, partner, and collaborate cross-functionally across engineering organizations. - Experience in building real-time large-scale, high-volume, distributed data pipelines on top of data buses (Kafka). - Hands-on experience with large-scale distributed NoSQL databases like Elasticsearch. - Knowledge and/or experience with containerized environments, Kubernetes, Docker. - Knowledge and/or experience with public cloud platforms like AWS, GCP. - Experience in implementing and maintaining highly scalable microservices in Rest, GRPC. - Experience in working with infrastructure layers like service mesh, Istio, Envoy. - Appetite for trying new things and building rapid POCs. Preferred Qualifications: - Knowledge of Observability concepts like Tracing, Metrics, Monitoring, Logging. - Knowledge of Prometheus. - Knowledge of OpenTelemetry / OpenTracing. - Knowledge of observability tools like Jaeger, Kibana, Grafana, etc. - Open-source community involvement. 
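Illustrative only, not part of the listing above: the role calls for real-time pipelines on Kafka feeding distributed stores like Elasticsearch. A minimal sketch of one such pipeline stage follows, written in Python as a language-neutral illustration (the role itself lists Java, Kotlin, and Go); topic, index, and connection settings are assumptions.

```python
# Minimal Kafka -> Elasticsearch pipeline stage (illustrative; topic names,
# index names, and connection details are placeholders).
import json

from confluent_kafka import Consumer
from elasticsearch import Elasticsearch

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "events-indexer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["transaction-events"])

es = Elasticsearch("http://localhost:9200")

try:
    while True:
        msg = consumer.poll(1.0)          # block up to 1s for the next record
        if msg is None or msg.error():
            continue
        event = json.loads(msg.value())   # payload assumed to be JSON
        # elasticsearch-py 8.x uses document=; older clients use body=
        es.index(index="transactions", document=event)
finally:
    consumer.close()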
We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: - Competitive base salaries. - Bonus incentives. - Support for financial well-being and retirement. - Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). - Flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need. - Generous paid parental leave policies (depending on your location). - Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). - Free and confidential counseling support through our Healthy Minds program. - Career development and training opportunities. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Amex GBT is a place where colleagues find inspiration in travel as a force for good and – through their work – can make an impact on our industry. We’re here to help our colleagues achieve success and offer an inclusive and collaborative culture where your voice is valued. We are seeking a hands-on, detail-oriented Revenue Recognition Manager based in India to join our Global Revenue Recognition team. This role will support both the implementation and ongoing operations of our new revenue automation platform. It combines deep technical accounting expertise, system implementation experience, and operational execution to ensure timely, accurate, and ASC 606-compliant revenue recognition. Reporting to the Director of Revenue Recognition, who is based in Canada, the ideal candidate is a proactive self-starter and strong collaborator with a proven ability to work across systems and functions. You will help build scalable revenue operations, implement robust controls, and drive automation excellence. What You'll Do: Partner with the implementation team, Digital Controllership, and Project Admin to translate ASC 606 policies into detailed system requirements. Review and validate design documents to ensure alignment with ASC 606 policies and business requirements. Validate the configuration of charge models, allocations, and recognition schedules in test environments. Execute test scripts (unit, system, UAT) for revenue processes, logging and tracking issues through resolution. Document “as-built” processes, data flows, and user procedures to support transition to business-as-usual operations. Own the month-end revenue recognition cycle: load contracts, run recognition jobs, and generate journal entries. Review and validate contract profiling to ensure contracts are accurately represented in the revenue system. Validate system outputs and accrual calculations, ensuring accuracy of revenue transactions flowing into the general ledger system and proper cutover during month-end close. Evaluate FP&A inputs (e.g., estimates, assumptions) used in revenue accruals for reasonableness, supportability, and auditability. Monitor and validate revenue roll-forward schedules, backlog reports, and performance-obligation reports generated by the system. Review and interpret complex contracts to identify accounting issues and determine appropriate ASC 606 treatment. Update and maintain revenue recognition accounting policies. Support quarterly external reporting, especially reviewing and drafting revenue disclosures. Support enhancement and documentation of SOX controls related to revenue recognition in the future-state environment. Collaborate with Internal Audit and external auditors by providing walkthroughs, evidence, and control documentation. What We're Looking For: Bachelor’s degree or equivalent experience in accounting, Finance, or a related field. CPA or CA required. 6+ years of progressive accounting experience with a focus on revenue recognition and technical accounting. Strong technical knowledge of ASC 606 principles and application. Proven experience with SOX controls related to revenue recognition. Prior experience at a Big 4 accounting firm is a plus. Hands-on experience with revenue automation platforms preferred (Zuora Revenue highly desirable; experience with NetSuite ARM or similar is a plus). Experience in system implementations or large-scale process transformations. Excellent analytical, problem-solving, and communication skills. 
Proven ability to collaborate effectively in a matrixed, fast-paced environment. Strong organizational skills with the ability to manage multiple priorities and meet deadlines. Strategic problem solver with leadership capabilities and a demonstrated ability to drive process improvements and change. Location Gurgaon, India The #TeamGBT Experience Work and life: Find your happy medium at Amex GBT. Flexible benefits are tailored to each country and start the day you do. These include health and welfare insurance plans, retirement programs, parental leave, adoption assistance, and wellbeing resources to support you and your immediate family. Travel perks: get a choice of deals each week from major travel providers on everything from flights to hotels to cruises and car rentals. Develop the skills you want when the time is right for you, with access to over 20,000 courses on our learning platform, leadership courses, and new job openings available to internal candidates first. We strive to champion Inclusion in every aspect of our business at Amex GBT. You can connect with colleagues through our global INclusion Groups, centered around common identities or initiatives, to discuss challenges, obstacles, achievements, and drive company awareness and action. And much more! All applicants will receive equal consideration for employment without regard to age, sex, gender (and characteristics related to sex and gender), pregnancy (and related medical conditions), race, color, citizenship, religion, disability, or any other class or characteristic protected by law. Click Here for Additional Disclosures in Accordance with the LA County Fair Chance Ordinance. Furthermore, we are committed to providing reasonable accommodation to qualified individuals with disabilities. Please let your recruiter know if you need an accommodation at any point during the hiring process. For details regarding how we protect your data, please consult the Amex GBT Recruitment Privacy Statement. What if I don’t meet every requirement? If you’re passionate about our mission and believe you’d be a phenomenal addition to our team, don’t worry about “checking every box;" please apply anyway. You may be exactly the person we’re looking for!
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description 66degrees is seeking a Senior Consultant with specialized expertise in AWS. The consultant will lead and scale cloud infrastructure, ensuring high availability, automation, and security across AWS, GCP, and Kubernetes environments. You will be responsible for designing and maintaining highly scalable, resilient, and cost-optimized infrastructure while implementing best-in-class DevOps practices, CI/CD pipelines, and observability solutions. As a key part of our client's platform engineering team, you will collaborate closely with developers, SREs, and security teams to automate workflows, optimize cloud performance, and build the backbone of their microservices. Candidates should have the ability to overlap with US working hours, be open to occasional weekend work, and be local to offices in either Noida or Gurgaon, India, as this is an in-office opportunity. Qualifications 7+ years of hands-on DevOps experience with proven expertise in AWS; involvement in SRE or Platform Engineering roles is desirable. Experience handling high-throughput workloads with occasional spikes. Prior industry experience with live sports and media streaming. Deep knowledge of Kubernetes architecture, managing workloads, networking, RBAC, and autoscaling is required. Expertise in the AWS platform with hands-on VPC, IAM, EC2, Lambda, RDS, EKS, and S3 experience is required; the ability to learn GCP with GKE is desired. Experience with Terraform for automated cloud provisioning; Helm is desired. Experience with FinOps principles for cost-optimization in cloud environments is required. Hands-on experience building highly automated CI/CD pipelines using Jenkins, ArgoCD, and GitHub Actions. Hands-on experience with service mesh technologies (Istio, Linkerd, Consul) is required. Knowledge of monitoring tools such as CloudWatch, Google Logging, and distributed tracing tools like Jaeger; experience with Prometheus and Grafana is desirable. Proficiency in Python and/or Go for automation, infrastructure tooling, and performance tuning is highly desirable. Strong knowledge of DNS, routing, load balancing, VPN, firewalls, WAF, TLS, and IAM. Experience managing MongoDB, Kafka or Pulsar for large-scale data processing is desirable. Proven ability to troubleshoot production issues, optimize system performance, and prevent downtime. Knowledge of multi-region disaster recovery and high-availability architectures. Desired Contributions to open-source DevOps projects or strong technical blogging presence. Experience with KEDA-based autoscaling in Kubernetes. (ref:hirist.tech)
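Illustrative only, not part of the listing above: the role pairs AWS automation in Python/Go with FinOps-style cost governance. The sketch below shows the kind of small boto3 helper that supports such work, flagging running EC2 instances missing a cost-allocation tag; the tag key and region are assumptions.

```python
# Illustrative FinOps-style helper: list running EC2 instances missing a
# cost-allocation tag. The "CostCenter" tag key and region are placeholders.
import boto3

def untagged_instances(tag_key: str = "CostCenter", region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"] for t in inst.get("Tags", [])}
                if tag_key not in tags:
                    missing.append(inst["InstanceId"])
    return missing

if __name__ == "__main__":
    print(untagged_instances())
```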
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Join us as a Platform Engineer - Business Intelligence Admin at Barclays, where you'll take part in the evolution of our digital landscape, driving innovation and excellence. You'll harness cutting-edge technology to revolutionize our digital offerings, ensuring unparalleled customer experiences. As a part of the team, you will deliver the technology stack, using strong analytical and problem-solving skills to understand the business requirements and deliver quality solutions. You'll be working on complex technical problems that require detailed analysis, done in conjunction with fellow engineers, business analysts and business stakeholders. To be successful as a Platform Engineer - Business Intelligence Admin you should have experience with: Specialist application development to deliver complex business requirements and solve critical issues in web applications built on Java/J2EE and REST/API frameworks. Must have end-to-end development experience of web applications. Develop solutions on Java and REST/API frameworks with full lifecycle implementation of the technical components. Experience on DevOps and infrastructure deployments. Ensure all required deliverables are produced in a timely manner to a high specification adhering to Barclays standards with full documentation that will allow a successful transition at the end of the project to the support team. Provide innovative solutions to the business needs by delivering and assisting in managing all stages of a project including demonstrations, design, development and implementation. A DevOps specialist concentrated on ensuring stable performance and uninterrupted availability of high-load apps across large-scale systems. An experienced DevOps engineer able to introduce continuous delivery and continuous integration workflows, which requires understanding of the mentioned tools and knowledge of several programming languages. Broadly, tasks comprise CI/CD management and automation, work with automation services and platforms, and scripting. Should have good hands-on experience and use best practices for security and controls. Easily identify and understand team management, deployment frequency, delivery timelines and lead time. A deep understanding of automation tools is required. Should have hands-on and/or in-depth knowledge of the following: SCM (Git, Bitbucket, GitHub/GitLab). Excellent scripting knowledge and ability to design through Shell, Python, or YAML. Build (Ant/Maven/TFS). Repository management (GitLab, Docker, Nexus). Containerization (Docker, Swarm, Kubernetes). Logging and monitoring (ELK, Splunk, New Relic, Nagios). Continuous Integration (Jenkins). Configuration management and environment provisioning/orchestration (Ansible, Chef). Collaboration, workflow and project/issue management (Jira, TeamCity). IaaS/PaaS OpenShift and cloud platforms (AWS/Azure). You may be assessed on key critical skills relevant for success in role, such as risk and controls, change and transformation, business acumen, strategic thinking and digital and technology, as well as job-specific technical skills. This role is based in Pune. Purpose of the role To build and maintain the systems that collect, store, process, and analyse data, such as data pipelines, data warehouses and data lakes to ensure that all data is accurate, accessible, and secure. Accountabilities Build and maintenance of data architecture pipelines that enable the transfer and processing of durable, complete and consistent data.
Design and implementation of data warehouses and data lakes that manage the appropriate data volumes and velocity and adhere to the required security measures. Development of processing and analysis algorithms fit for the intended data complexity and volumes. Collaboration with data scientists to build and deploy machine learning models. Analyst Expectations Will have an impact on the work of related teams within the area. Partner with other functions and business areas. Takes responsibility for end results of a team’s operational processing and activities. Escalate breaches of policies/procedures appropriately. Take responsibility for embedding new policies/procedures adopted due to risk mitigation. Advise and influence decision making within own area of expertise. Take ownership for managing risk and strengthening controls in relation to the work you own or contribute to. Deliver your work and areas of responsibility in line with relevant rules, regulations and codes of conduct. Maintain and continually build an understanding of how own sub-function integrates with function, alongside knowledge of the organisation's products, services and processes within the function. Demonstrate understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function. Make evaluative judgements based on the analysis of factual information, paying attention to detail. Resolve problems by identifying and selecting solutions through the application of acquired technical experience and will be guided by precedents. Guide and persuade team members and communicate complex/sensitive information. Act as contact point for stakeholders outside of the immediate function, while building a network of contacts outside team and external to the organisation. All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.
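Illustrative only, not part of the listing above: the role asks for scripting (Shell, Python, YAML) alongside CI/CD automation. A common scripted step in such pipelines is a post-deployment smoke check; a minimal sketch follows, where the endpoint URL and retry settings are assumptions.

```python
# Illustrative post-deployment smoke check a CI/CD stage might run
# (endpoint and retry settings are placeholders, not from the listing).
import sys
import time

import requests

def wait_until_healthy(url: str, attempts: int = 10, delay: float = 5.0) -> bool:
    for _ in range(attempts):
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code == 200:
                return True
        except requests.RequestException:
            pass  # the service may still be starting up
        time.sleep(delay)
    return False

if __name__ == "__main__":
    ok = wait_until_healthy("https://example.internal/app/health")
    sys.exit(0 if ok else 1)   # non-zero exit fails the pipeline stage
```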
Posted 1 week ago
5.0 - 13.0 years
0 Lacs
Pune, Maharashtra
On-site
You are a highly skilled and experienced Cloud Architect/Engineer with deep expertise in Google Cloud Platform (GCP). Your primary responsibility is to design, build, and manage scalable and reliable cloud infrastructure on GCP. You will leverage various GCP services such as Compute Engine, Cloud Run, BigQuery, Pub/Sub, Cloud Functions, Dataflow, Dataproc, IAM, and Cloud Storage to ensure high-performance cloud solutions. Your role also includes developing and maintaining CI/CD pipelines, automating infrastructure deployment using Infrastructure as Code (IaC) principles, and implementing best practices in cloud security, monitoring, performance tuning, and logging. Collaboration with cross-functional teams to deliver cloud solutions aligned with business objectives is essential. You should have 5+ years of hands-on experience in cloud architecture and engineering, with at least 3 years of practical experience on Google Cloud Platform (GCP). In-depth expertise in the GCP services mentioned above is required. Strong understanding of networking, security, containerization (Docker, Kubernetes), and CI/CD pipelines is essential. Experience with monitoring, performance tuning, and logging in cloud environments is preferred. Familiarity with DevSecOps practices and tools such as HashiCorp Vault is a plus. Your role as a GCP Cloud Architect/Engineer will contribute to ensuring system reliability, backup, and disaster recovery strategies. This hybrid role is based out of Pune and requires a total of 10 to 13 years of relevant experience.
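Illustrative only, not part of the listing above: among the GCP services named, BigQuery is the analytical workhorse. A minimal google-cloud-bigquery sketch is shown below; the project, dataset, table, and columns are placeholders.

```python
# Minimal google-cloud-bigquery sketch (illustrative; project, dataset, and
# table names are placeholders, not from the listing).
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

query = """
    SELECT region, COUNT(*) AS events
    FROM `my-gcp-project.analytics.events`
    WHERE event_date >= '2024-01-01'
    GROUP BY region
    ORDER BY events DESC
"""

for row in client.query(query).result():   # runs the job and waits for completion
    print(row["region"], row["events"])
```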
Posted 1 week ago
6.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Role Description This is a full-time role for an Application Architect. The ideal candidate will have strong experience in Python as the primary backend language, along with familiarity in Java or Node.js as a plus. The role demands end-to-end technical ownership of modern web applications, including frontend, backend, cloud infrastructure, CI/CD, and container orchestration. The Application Architect will lead the architectural design of scalable, secure, and maintainable applications while mentoring teams across all layers of the stack. Key Responsibilities Application Architecture & Design (Full Stack) Own the overall architecture of web applications from frontend to backend and infrastructure. Translate functional and non-functional requirements into scalable, maintainable, and secure designs. Define and enforce architectural principles, design patterns, and technology standards. Backend Architecture (Python-Focused) Lead the design and development of high-performance backend systems using Python (e.g., FastAPI, Django, Flask). Implement RESTful APIs and microservices with focus on modularity, testability, and scalability. Guide backend teams on system design, API integration, and security practices. Nice to have: Working knowledge of Java (Spring Boot) and Node.js (Express/NestJS) for cross-functional collaboration. Frontend Architecture (React / Angular / Next.js) Provide technical oversight of frontend architecture using React, Angular, or Next.js. Collaborate with design teams to implement responsive and accessible UIs. Ensure frontend best practices including performance optimization, component reuse, and security. Cloud Architecture (AWS / Azure / GCP) Design and deploy cloud-native applications with AWS, Azure, or Google Cloud. Define cloud service usage patterns for APIs, databases, messaging, secrets, and monitoring. Promote cloud cost efficiency, high availability, and auto-scaling strategies. Containerization & Orchestration (Docker / Kubernetes) Drive adoption of Docker-based containerization and Kubernetes for scalable deployments. Design and maintain Helm charts, service discovery, and load balancing configurations. Define infrastructure standards and environments using IaC (Terraform, Pulumi). CI/CD & DevOps Enablement Architect and maintain CI/CD pipelines with GitHub Actions, Jenkins, GitLab CI, or Azure DevOps. Implement automated testing, linting, security scanning, and release orchestration. Collaborate with DevOps and SRE teams for deployment automation and incident response readiness. Technical Leadership and Mentorship Mentor and lead engineers across the backend, frontend, and DevOps functions. Review designs and code to ensure adherence to architecture, scalability, and security goals. Foster a culture of clean code, continuous learning, and technical excellence. Performance, Observability & Security Ensure applications are secure-by-design and meet regulatory or organizational standards. Drive system observability through logging, metrics, tracing, and alerting. Support performance tuning, load testing, and capacity planning activities. Qualifications Proven experience as an Application Architect, Solution Architect, or Lead Backend/Full Stack Developer. Strong hands-on experience with Python as the primary backend language (FastAPI, Django, Flask). Nice to have: Familiarity with Java (Spring Boot) and Node.js environments. Solid understanding of frontend technologies – React, Angular, or Next.js – and browser-based architecture principles.
Deep expertise in RESTful API design, microservices architecture, and distributed systems. Cloud expertise in AWS, Azure, or GCP, with hands-on experience in building secure and scalable cloud applications. Experience with Docker, Kubernetes, and container orchestration at scale. Hands-on with CI/CD pipelines, infrastructure automation, and DevOps practices. Working knowledge of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB), caching (Redis), and event/message-driven systems (Kafka, SQS, RabbitMQ). Strong leadership, problem-solving, and cross-functional communication skills. Education and Experience Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 6+ years of software development experience, with 2+ years in an architectural or technical leadership role. Prior experience designing and scaling production-grade applications in cloud-native environments.
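Illustrative only, not part of the listing above: the role names FastAPI among the Python frameworks for RESTful microservices. A minimal FastAPI service sketch follows; the Item model, routes, and in-memory store are hypothetical.

```python
# Minimal FastAPI microservice sketch (illustrative; the Item model and routes
# are hypothetical, not taken from the listing).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="catalog-service")

class Item(BaseModel):
    sku: str
    name: str
    price: float

_items: dict[str, Item] = {}   # in-memory store standing in for a real database

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.post("/items", status_code=201)
def create_item(item: Item) -> Item:
    _items[item.sku] = item
    return item

@app.get("/items/{sku}")
def get_item(sku: str) -> Item:
    if sku not in _items:
        raise HTTPException(status_code=404, detail="item not found")
    return _items[sku]

# Run locally with: uvicorn main:app --reload
```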
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As part of a Global team at The StoneX Group, you will be responsible for enhancing and growing the use of Murex Worldwide. Your main focus will be on developing and maintaining the Murex system for both internal and external users. In this role, you will work as a Murex Workflow/Integration Developer, where you will create, enhance, and maintain Murex post-trade workflows and OSP screens. You will be expected to work independently to analyze issues and provide solutions, proactively identifying improvements to make the system more resilient. Your responsibilities will also include the initial diagnosis of problems, applying known solutions, documenting issues, progress checking, and escalating when necessary to ensure prompt resolution. To excel in this position, you should possess 5-7 years of Murex workflow experience, a solid understanding of the Murex financial schema, good communication skills, intermediate/advanced knowledge of SQL, and the ability to adapt quickly and solve problems with an analytical mindset towards design and debugging. It would be advantageous to have experience in creating Workflow tasks with Java, hands-on knowledge of pre-trade workflows/rules, Murex EOD debugging experience, Murex upgrade experience, Murex datamart experience, knowledge of Murex services and logging, and familiarity with tools such as Jira and ServiceNow. Joining The StoneX Group will offer you a unique opportunity to be part of a Fortune-100, Nasdaq-listed provider that connects clients to the global markets, focusing on innovation, human connection, and world-class products and services. With endless potential for progression and growth in the corporate segment, you will engage in various business-critical activities that contribute to the efficient operation of the company. From strategic marketing and financial management to human resources and operational oversight, you will have the chance to optimize processes and implement game-changing policies. Whether you are interested in connecting retail clients to trading opportunities or delving into the world of institutional investing, The StoneX Group provides a dynamic environment that supports your career aspirations.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Amreli, Gujarat
On-site
As an Engineering Manager at Diagnal, a leading technology company in the digital entertainment industry, you will be responsible for crafting exceptional software products and leading high-performing software development teams. Your role will involve demonstrating a genuine passion for both constructing and guiding software development teams, thriving in a fast-paced environment, ensuring timely delivery of high-quality digital entertainment products, and collaborating seamlessly with cross-functional teams. You will champion the engineering of software solutions for long-term performance, maintainability, and quality over shortcuts. Additionally, you will contribute insights to the product roadmap, enhance development methodologies, and proactively address potential organizational, procedural, and technical challenges. To be successful in this role, you should have a proven track record in recruiting, motivating, managing, and retaining high-performing software engineering teams. You should possess a comprehensive understanding of quality processes, hands-on development experience, proficiency in React, Kotlin, and optionally Swift, and a strong grasp of Agile software development practices. Experience with Git-based source code management, Continuous Integration, JIRA configuration, contemporary internet technology stacks, and best practices in integration, security, scalability, and performance optimization is essential. You should have over 3 years of experience managing software engineering teams, over 6 years of hands-on development experience, and an engineering degree in computer science or equivalent practical experience. If you are self-motivated, adept at time management, and ready to spearhead innovation, drive excellence, and make a significant impact in the digital entertainment industry, we invite you to join our team at Diagnal. Apply now to be a part of our dynamic and talented community.
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DevOps Lead - GCP Job Date: Jun 25, 2025 Job Requisition Id: 61687 Location: Hyderabad, TG, IN YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hire GCP Professionals in the following areas: Experience 5-8 Years Job Description Implement release process through automation Training, documentation and creating solution library Creating and delivering solutions with a team and using best practices Lead the Team in DevOps assessment and implementation for different projects. Understanding of different cloud technologies (AWS, Azure, GCP) and its services Able to implement Infrastructure automation process and tools/technologies in cloud/on-premise environments Leading end-to-end implementation of projects (including solution architecture and implementation planning) Prepare best suitable solution for clients in different project needs basis multiple tools/technologies requirement. Providing meaningful solutions and implementation of different automation processes around Infrastructure, application release and monitoring Deep understanding of different DevOps processes and its integration with multiple different solutions of cloud native and cloud agnostic services/tools/technologies and creating best practices to implement it in different environments Required Technical/ Functional Competencies Domain/Industry Knowledge: Specialized knowledge of client’s business processes and basic knowledge of technology, platform, product & DevOps Processes.
Prepare process maps/workflows/business cases, medium to complex models, apply industry standards & analyse current-state, define to-be processes. Requirement Gathering And Analysis: Design a demo system to demonstrate, extract functional/non-functional requirements & document them & system/software specification in complex scenarios. Analyse the impact of change requests/enhancements/defect fixes, conduct technology/business gap analysis and identify gaps in transition requirements, identify modules impacted and features/functionalities, arrive at high-level estimates and develop traceability matrix. Platform/Technology Knowledge: Specialized knowledge of implementation on product/platform standards and technologies. Implement processes or configure/customize products and provide inputs in design and architecture and drive adoption of industry standards and best practices. Adhere to standard processes (CI/CD), scenarios, documents of low-level design. Analyse/review various frameworks/tools, handle medium to complex modules. Infrastructure Management: Specialized knowledge to develop infra processes for on-premise and cloud, following their automation standards and best practices. Able to develop/execute infra automation scripts, create/verify centralized infra-as-code processes, plan/develop/conduct test cases, analyse results & impact, and identify root causes for issues. Application Build, Deployment, Testing & Security Automation Process through CI/CD, Continuous Testing and Continuous Security: Specialized knowledge of application build, deployment, testing, and security automation principles and frameworks. Create version control strategy for code & automation, with understanding of the usage of tools/technologies for CI/CD/CT/CS processes. Create pipelines for application build, deployment and integration of testing & security into pipelines, and create monitoring dashboards to examine build and deployment metrics. Create security tasks and integrate them. Architecture Tools And Frameworks: Specialized knowledge of architecture tools & frameworks. Implement tools & frameworks in complex scenarios and conduct tools/customization/tailoring workshops. Customer Management: Specialized knowledge of customer’s business domain, technology, and principles. Use the latest technology, build it into client engagement, understand the customer business and proactively suggest solutions which lead to additional business. Operations Management Including Monitoring And Logging: Specialized knowledge of infrastructure & application operations including monitoring and logging/reporting through dashboards, automation principles and frameworks for monitoring and logging solutions, and design of the automation implementation process. Create metrics for infrastructure and application monitoring. Change & Release Management: Specialized knowledge of change record (CR) tools, change management activities and their impacts. Process steps for submission/review of change records, deployment, post-implementation review, all the elements of a CAB and ECAB, and release management activities. Able to maintain a mandatory change book to reflect/identify changes, review records, classify them as low/medium/high risk, and authorize the progression of change records. Required Behavioral Competencies Accountability: Takes responsibility for and ensures accuracy of own work, as well as the work and deadlines of the team. Collaboration: Shares information within team, participates in team activities, asks questions to understand other points of view.
Agility: Demonstrates readiness for change, asking questions and determining how changes could impact own work. Customer Focus: Identifies trends and patterns emerging from customer preferences and works towards customizing/refining existing services to exceed customer needs and expectations. Communication: Targets communications for the appropriate audience, clearly articulating and presenting his/her position or decision. Drives Results: Sets realistic stretch goals for self & others to achieve and exceed defined goals/targets. Resolves Conflict: Displays sensitivity in interactions and strives to understand others’ views and concerns. Certifications: Mandatory. At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; stable employment with a great atmosphere and ethical corporate culture.
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview We are looking for a DevOps Engineer who thrives at the intersection of infrastructure, automation, and operational excellence. You will be responsible for maintaining and scaling our CI/CD pipelines, provisioning cloud environments, and enabling rapid, secure, and reliable deployment of software for our AI and edge platform ecosystem. This role is ideal for someone who enjoys solving real-world system challenges and collaborating with cross-functional teams to support fast-moving development cycles, platform stability, and deployment efficiency. Key Responsibilities Infrastructure Automation & CI/CD Build and maintain CI/CD pipelines to support development, testing, and deployment workflows. Automate infrastructure provisioning using tools like Terraform, Ansible, or Pulumi. Set up scalable and secure cloud environments using platforms such as AWS, GCP, or Azure. Manage and optimize storage buckets and related access controls across cloud providers. Deployment, Monitoring & Package Management Manage Dockerized applications, Kubernetes clusters, and container orchestration systems. Monitor system health, performance metrics, and logging pipelines using tools like Prometheus, Grafana, ELK, or similar. Handle internal APT/YUM repositories for distributing infrastructure and application packages. Deploy and maintain the latest security and system utility packages across environments. Respond to system alerts and incidents, ensuring platform availability and uptime. Security & Compliance Implement best practices for DevSecOps including role-based access control, secrets management, and secure pipelines. Maintain audit logs, backup strategies, and disaster recovery plans for critical systems. Collaboration & Support Work closely with software engineers to streamline deployment processes and improve release velocity. Support development, QA, and product teams with test environments, release workflows, and performance optimization. Contribute to internal documentation and playbooks for deployments, upgrades, and environment management. Required Skills & Qualifications: Educational Background Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent practical experience). Professional Experience 2–4 years of hands-on DevOps experience in managing cloud infrastructure and CI/CD. Proven experience with containerization, orchestration, and cloud deployments in real-world environments. Technical Proficiency Proficiency with infrastructure as code (IaC) tools such as Terraform, Ansible, or CloudFormation. Experience with CI/CD tools like GitHub Actions, GitLab CI, Jenkins, or CircleCI. Strong understanding of Docker, Kubernetes, and service orchestration patterns. Familiarity with cloud environments (AWS/GCP/Azure), storage buckets, and networking fundamentals. Experience with observability tools such as Prometheus, Grafana, Loki, or ELK. Experience managing APT/YUM package repositories and system-level deployments. Bonus Points Exposure to edge computing environments or embedded system deployments. Knowledge of VPN setup, certificate management, and secure provisioning workflows. Familiarity with scripting languages (Bash, Python) for automation tasks. Contact Information To apply, please send your resume and portfolio details to hire@condor-ai.com with “Application: DevOps Engineer” in the subject line. About Condor AI Condor is an AI engineering company where we use artificial intelligence models to deploy solutions in the real world. 
Our core strength lies in Edge AI, combining custom hardware with optimized software for fast, reliable, on-device intelligence. We work across smart cities, industrial automation, logistics, and security, with a team that brings over a decade of experience in AI, embedded systems, and enterprise-grade solutions. We operate lean, think globally, and build for production from system design to scaled deployment.
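Illustrative only, not part of the listing above: the role pairs Python scripting with Prometheus/Grafana monitoring. A minimal custom-exporter sketch with the prometheus_client library follows; the metric name, port, and disk-usage example are assumptions.

```python
# Illustrative Prometheus exporter for a custom health metric (metric name,
# port, and the disk-usage example are placeholders, not from the listing).
import shutil
import time

from prometheus_client import Gauge, start_http_server

disk_free_bytes = Gauge("node_scratch_disk_free_bytes", "Free space on the scratch volume")

if __name__ == "__main__":
    start_http_server(9108)          # Prometheus scrapes http://host:9108/metrics
    while True:
        disk_free_bytes.set(shutil.disk_usage("/").free)
        time.sleep(15)
```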
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Vectra® is the leader in AI-driven threat detection and response for hybrid and multi-cloud enterprises. The Vectra AI Platform delivers integrated signal across public cloud, SaaS, identity, and data center networks in a single platform. Powered by patented Attack Signal Intelligence, it empowers security teams to rapidly prioritize, investigate and respond to the most advanced cyber-attacks. With 35 patents in AI-driven threat detection and the most vendor references in MITRE D3FEND, organizations worldwide rely on the Vectra AI to move at the speed and scale of hybrid attackers. For more information, visit www.vectra.ai. Position Overview We are seeking an experienced Threat Detection Engineer to extend Vectra's detection capabilities in partnership with Data Scientists and Security Researchers who are developing our AI-driven Attack Signal. Vectra's Attack Signal Production Group is responsible for building Vectra's core threat detection and prioritization technology, leveraging AI and other methods to alert customers to critical threats in their network and cloud environments. Threat Detection Engineers work closely with Data Scientists who are developing AI models, and Security Researchers who are researching the threat landscape and assisting modeling efforts. Detection Engineers focused on Network attack behaviors complement Vectra's coverage by building Suricata signatures, specifying detection logic in python, and utilizing other available methods. Responsibilities and Accountabilities: Analyze network traffic to identify and document threat patterns. Develop and maintain network-based security signatures in Suricata. Use offensive security tools and techniques to simulate attacks and generate sample network traffic. Collaborate with data scientists and security researchers to support detection efforts and improve detection accuracy. Continuously monitor and assess the effectiveness of network detections, making adjustments as needed. Contribute to threat hunting efforts by identifying new tactics, techniques, and procedures (TTPs) used by attackers. Participate in incident response activities as required. Attitudes and Behaviors: Focus on impact and results; work on the right things and get them done Drive and resourcefulness to persevere and overcome obstacles achieving challenging goals Track record of successfully solving complex and ambiguous problems High integrity and ability to positively collaborate with others Qualifications and Experience 5+ years of cybersecurity experience (preferably focused on threat detection and response) Expertise in writing signatures with Suricata Excellent people, technical and communication skills, and the ability to work collaboratively in a team environment. Advanced knowledge of common operating systems, services, networking protocols, logging, cloud and SaaS environments Knowledge of attacker techniques and tools (e.g., Metasploit, Cobalt Strike), and prior operational experience leveraging threat intelligence to detect and respond to adversaries Familiarity with data utilized by detection technology, for example PCAPs, flow logs, cloud logs, etc. Proficiency with related languages and frameworks, e.g. bash, python, Sigma, YARA-L, Linux/Unix, Wireshark, etc. 
Scripting, software development, engineering, and/or devops experience; experience with a source control system, preferably Git Optional certifications - OSCP, GCIA, GCDA, GSEC Vectra provides a comprehensive total rewards package that supports the financial, physical, mental and overall health of our employees and their families. Compensation includes competitive base pay, incentive plan eligibility, and participation in the employee equity plan (stock options). Specific benefits offered varies by location, but commonly include health care insurance, income protection / life insurance, access to retirement savings plans, behavioral & emotional wellness services, generous time away from work, and a comprehensive employee recognition program. Vectra is committed to creating a diverse environment and is proud to be an equal opportunity employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
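Illustrative only, not part of the listing above: the role centers on writing Suricata signatures and specifying detection logic in Python. The toy sketch below pairs an example rule string with a tiny check over parsed HTTP metadata; the rule content, sid, and user-agent list are hypothetical and are not Vectra detection logic.

```python
# Illustrative only: a toy Suricata rule (as a string) plus a small Python check
# over eve.json-style HTTP records. All thresholds and indicators are made up.
SURICATA_RULE = (
    'alert http $HOME_NET any -> $EXTERNAL_NET any '
    '(msg:"EXAMPLE Suspicious scripted user-agent"; flow:established,to_server; '
    'http.user_agent; content:"python-requests"; nocase; sid:1000001; rev:1;)'
)

SUSPICIOUS_AGENTS = ("python-requests", "curl/", "powershell")

def flag_http_records(records):
    """records: iterable of dicts parsed from eve.json-style HTTP logs."""
    for rec in records:
        agent = (rec.get("http", {}).get("http_user_agent") or "").lower()
        if any(marker in agent for marker in SUSPICIOUS_AGENTS):
            yield {"src_ip": rec.get("src_ip"), "user_agent": agent}

if __name__ == "__main__":
    sample = [{"src_ip": "10.0.0.5", "http": {"http_user_agent": "python-requests/2.31"}}]
    for hit in flag_http_records(sample):
        print("suspicious:", hit)
```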
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Description Oracle Health is focused on delivering software solutions to help the world’s largest pharmaceutical companies positively impact people’s lives by supporting the cost-effective development of treatments for today’s most challenging health related issues. We are seeking an experienced and driven Software Developer who thrives at the intersection of Java development and cloud-native DevOps practices. In this role, you will play a critical part in building and maintaining Java-based microservices designed to simplify application management, streamline the integration of new features, and enhance operational efficiency. Additionally, you will be deeply involved in deployment automation, infrastructure planning and management, and providing operational support for a large enterprise-class application deployed across Oracle Cloud Infrastructure (OCI). This unique position offers the opportunity to make significant contributions throughout the entire software lifecycle, directly improving products that positively impact people worldwide. Responsibilities Responsibilities: Collaborate closely with product teams to define and implement onboarding, deployment, and monitoring requirements for new services, features, and integrations. Design, develop, and maintain Java-based microservices that support application management, configuration, and DevOps automation within a robust, cloud-native platform. Partner with infrastructure and application management teams to streamline deployment pipelines, manage containerized workloads, and optimize application performance in Oracle Cloud. Plan and coordinate phased and multi-realm deployments, ensuring tasks are clearly assigned, managed, and successfully executed. Develop and maintain Unix/Linux shell scripts to automate operational tasks and streamline deployment processes. Proactively monitor, manage, and troubleshoot cloud infrastructure components utilizing advanced logging and monitoring frameworks. Assist in Kubernetes configuration, management, and deployment strategies. Provide support and guidance to internal product teams, ensuring consistent application environment stability and performance. Uphold high standards for application reliability, security, and performance in production cloud environments. Required Qualifications: 3–5 years of hands-on Java development experience. Experience designing web-based applications using front-end frameworks and RESTful microservices. Experience leveraging GenAI and Agents in workflows to increase productivity Experience working with containerized application environments such as Docker and Kubernetes. Understanding of basic networking concepts: TCP/IP, SSL/TLS, VPN, Load Balancing, DNS, routing, and SSH. Working knowledge of Unix/Linux systems and proficiency in shell scripting. Experience with source code management tools such as Git, Bitbucket, or GitHub. Flexibility to occasionally work alternative schedules, including after-hours or weekend shifts, to support scheduled product releases/upgrades. Preferred Qualifications: Strong knowledge of cloud computing principles (compute, storage, networking, database services) Previous experience using popular cloud providers (Oracle Cloud, AWS, etc.) Experience with monitoring and troubleshooting tools like Prometheus, Grafana, cURL, and Wireshark. Understanding of cloud cost optimization techniques and management tools. Knowledge of modern authentication methods (OAuth, JWT) and security best practices in distributed systems. 
Experience with Oracle RDBMS and SQL. Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
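Illustrative only, not part of the listing above: the preferred qualifications mention OAuth/JWT and security best practices in distributed systems. A minimal JWT issue/verify round trip with PyJWT is sketched below as a language-neutral illustration (the role itself is Java-centric); the secret, claims, and algorithm choice are assumptions, and production services would normally use asymmetric keys from an identity provider.

```python
# Illustrative JWT issue/verify round trip with PyJWT (secret, claims, and
# algorithm are placeholders; not the project's actual auth setup).
import datetime

import jwt

SECRET = "change-me"   # placeholder only

def issue_token(subject: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    payload = {"sub": subject, "iat": now, "exp": now + datetime.timedelta(minutes=15)}
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on failure
    return jwt.decode(token, SECRET, algorithms=["HS256"])

if __name__ == "__main__":
    print(verify_token(issue_token("service-account-1"))["sub"])
```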
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams. Requirements 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on analytics engines like Big Data and PySpark. Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments. Job responsibilities Design, develop, and maintain robust data pipelines using Big Data and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis. Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance. Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services. Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. 
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
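To make the ETL/ELT responsibilities in the posting above concrete, here is a minimal, hypothetical PySpark sketch (not GlobalLogic's actual pipeline): it reads raw CSV data, applies a light cleansing step, and writes partitioned Parquet. The paths and column names are illustrative assumptions.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical source/target locations and column names.
RAW_PATH = "s3a://example-bucket/raw/orders/"
CURATED_PATH = "s3a://example-bucket/curated/orders/"

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw CSV with a header row; schema inference keeps the sketch short.
raw = spark.read.option("header", True).option("inferSchema", True).csv(RAW_PATH)

# Transform: de-duplicate, drop invalid rows, and derive a partition column.
curated = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("order_date", F.to_date("created_at"))
)

# Load: partitioned Parquet lets downstream queries prune by date.
curated.write.mode("overwrite").partitionBy("order_date").parquet(CURATED_PATH)

spark.stop()
```

In a production pipeline the schema would normally be declared explicitly rather than inferred, and the job would be scheduled and monitored rather than run ad hoc.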
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description Oracle Health is focused on delivering software solutions to help the world’s largest pharmaceutical companies positively impact people’s lives by supporting the cost-effective development of treatments for today’s most challenging health-related issues. We are seeking an experienced and driven Software Developer who thrives at the intersection of Java development and cloud-native DevOps practices. In this role, you will play a critical part in building and maintaining Java-based microservices designed to simplify application management, streamline the integration of new features, and enhance operational efficiency. Additionally, you will be deeply involved in deployment automation, infrastructure planning and management, and providing operational support for a large enterprise-class application deployed across Oracle Cloud Infrastructure (OCI). This unique position offers the opportunity to make significant contributions throughout the entire software lifecycle, directly improving products that positively impact people worldwide. Responsibilities: Collaborate closely with product teams to define and implement onboarding, deployment, and monitoring requirements for new services, features, and integrations. Design, develop, and maintain Java-based microservices that support application management, configuration, and DevOps automation within a robust, cloud-native platform. Partner with infrastructure and application management teams to streamline deployment pipelines, manage containerized workloads, and optimize application performance in Oracle Cloud. Plan and coordinate phased and multi-realm deployments, ensuring tasks are clearly assigned, managed, and successfully executed. Develop and maintain Unix/Linux shell scripts to automate operational tasks and streamline deployment processes. Proactively monitor, manage, and troubleshoot cloud infrastructure components utilizing advanced logging and monitoring frameworks. Assist in Kubernetes configuration, management, and deployment strategies. Provide support and guidance to internal product teams, ensuring consistent application environment stability and performance. Uphold high standards for application reliability, security, and performance in production cloud environments. Required Qualifications: 3–5 years of hands-on Java development experience. Experience designing web-based applications using front-end frameworks and RESTful microservices. Experience leveraging GenAI and Agents in workflows to increase productivity. Experience working with containerized application environments such as Docker and Kubernetes. Understanding of basic networking concepts: TCP/IP, SSL/TLS, VPN, Load Balancing, DNS, routing, and SSH. Working knowledge of Unix/Linux systems and proficiency in shell scripting. Experience with source code management tools such as Git, Bitbucket, or GitHub. Flexibility to occasionally work alternative schedules, including after-hours or weekend shifts, to support scheduled product releases/upgrades. Preferred Qualifications: Strong knowledge of cloud computing principles (compute, storage, networking, database services). Previous experience using popular cloud providers (Oracle Cloud, AWS, etc.). Experience with monitoring and troubleshooting tools like Prometheus, Grafana, cURL, and Wireshark. Understanding of cloud cost optimization techniques and management tools. Knowledge of modern authentication methods (OAuth, JWT) and security best practices in distributed systems.
Experience with Oracle RDBMS and SQL. Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
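The role above is Java-centric, but its operational side (monitoring containerized workloads and troubleshooting deployments) is easy to illustrate with a small script. This is only a sketch using the official Kubernetes Python client, not Oracle tooling; the namespace name is a hypothetical placeholder.

```python
from kubernetes import client, config

NAMESPACE = "example-app"  # hypothetical namespace

def report_unhealthy_pods(namespace: str) -> None:
    """Print pods that are not Running/Succeeded, plus their restart counts."""
    config.load_kube_config()  # use config.load_incluster_config() when running inside a pod
    core = client.CoreV1Api()
    for pod in core.list_namespaced_pod(namespace).items:
        phase = pod.status.phase
        restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
        if phase not in ("Running", "Succeeded") or restarts > 0:
            print(f"{pod.metadata.name}: phase={phase}, restarts={restarts}")

if __name__ == "__main__":
    report_unhealthy_pods(NAMESPACE)
```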
Posted 1 week ago
7.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description We are seeking a highly skilled Senior Data Engineer with strong expertise in Python development and MySQL, along with hands-on experience in Big Data technologies, PySpark, and cloud platforms such as AWS, GCP, or Azure. The ideal candidate will play a critical role in designing and developing scalable data pipelines and infrastructure to support advanced analytics and data-driven decision-making across teams. Requirements 7 to 12 years of overall experience in data engineering or related domains. Proven ability to work independently on big data analytics engines such as Spark (PySpark). Strong hands-on experience in Python programming, with a focus on data handling and backend services. Proficiency in MySQL, with the ability to write and optimize complex queries; knowledge of Redis is a plus. Solid understanding and hands-on experience with public cloud services (AWS, GCP, or Azure). Familiarity with monitoring tools such as Grafana, ELK, Loki, and Prometheus. Experience with IaC tools like Terraform and Helm. Proficiency in containerization and orchestration using Docker and Kubernetes. Strong collaboration and communication skills to work in agile and cross-functional environments. Job responsibilities Design, develop, and maintain robust data pipelines using Big Data technologies and PySpark for ETL/ELT processes. Build scalable and efficient data solutions across cloud platforms (AWS/GCP/Azure) using modern tools and technologies. Write high-quality, maintainable, and efficient code in Python for data engineering tasks. Develop and optimize complex queries using MySQL and work with caching systems like Redis. Implement monitoring and logging using Grafana, ELK, Loki, and Prometheus to ensure system reliability and performance. Use Terraform and Helm for infrastructure provisioning and automation (Infrastructure as Code). Leverage Docker and Kubernetes for containerization and orchestration of services. Collaborate with cross-functional teams including engineering, product, and analytics to deliver impactful data solutions. Contribute to system architecture decisions and influence best practices in cloud data infrastructure. What we offer Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders. Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally. Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility.
With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
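This listing repeats the data-engineering profile above for a different location, so rather than repeat the ETL sketch, here is a minimal, assumption-laden illustration of the Prometheus monitoring requirement: a toy pipeline step instrumented with the prometheus_client library. The metric names and port are invented for the example.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Hypothetical metric names; a real pipeline would define its own naming scheme.
ROWS_PROCESSED = Counter("etl_rows_processed_total", "Rows processed by the ETL step")
BATCH_SECONDS = Histogram("etl_batch_duration_seconds", "Wall-clock time per batch")

def process_batch() -> None:
    """Stand-in for a real transformation step."""
    with BATCH_SECONDS.time():
        rows = random.randint(100, 1000)
        time.sleep(0.1)               # simulate work
        ROWS_PROCESSED.inc(rows)

if __name__ == "__main__":
    start_http_server(8000)           # metrics scraped from http://localhost:8000/metrics
    while True:
        process_batch()
        time.sleep(5)
```

A Prometheus server would scrape this endpoint, and a Grafana dashboard or alert rule could then track throughput and batch latency.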
Posted 1 week ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description Oracle Health is focused on delivering software solutions to help the world’s largest pharmaceutical companies positively impact people’s lives by supporting the cost-effective development of treatments for today’s most challenging health-related issues. We are seeking an experienced and driven Software Developer who thrives at the intersection of Java development and cloud-native DevOps practices. In this role, you will play a critical part in building and maintaining Java-based microservices designed to simplify application management, streamline the integration of new features, and enhance operational efficiency. Additionally, you will be deeply involved in deployment automation, infrastructure planning and management, and providing operational support for a large enterprise-class application deployed across Oracle Cloud Infrastructure (OCI). This unique position offers the opportunity to make significant contributions throughout the entire software lifecycle, directly improving products that positively impact people worldwide. Responsibilities: Collaborate closely with product teams to define and implement onboarding, deployment, and monitoring requirements for new services, features, and integrations. Design, develop, and maintain Java-based microservices that support application management, configuration, and DevOps automation within a robust, cloud-native platform. Partner with infrastructure and application management teams to streamline deployment pipelines, manage containerized workloads, and optimize application performance in Oracle Cloud. Plan and coordinate phased and multi-realm deployments, ensuring tasks are clearly assigned, managed, and successfully executed. Develop and maintain Unix/Linux shell scripts to automate operational tasks and streamline deployment processes. Proactively monitor, manage, and troubleshoot cloud infrastructure components utilizing advanced logging and monitoring frameworks. Assist in Kubernetes configuration, management, and deployment strategies. Provide support and guidance to internal product teams, ensuring consistent application environment stability and performance. Uphold high standards for application reliability, security, and performance in production cloud environments. Required Qualifications: 3–5 years of hands-on Java development experience. Experience designing web-based applications using front-end frameworks and RESTful microservices. Experience leveraging GenAI and Agents in workflows to increase productivity. Experience working with containerized application environments such as Docker and Kubernetes. Understanding of basic networking concepts: TCP/IP, SSL/TLS, VPN, Load Balancing, DNS, routing, and SSH. Working knowledge of Unix/Linux systems and proficiency in shell scripting. Experience with source code management tools such as Git, Bitbucket, or GitHub. Flexibility to occasionally work alternative schedules, including after-hours or weekend shifts, to support scheduled product releases/upgrades. Preferred Qualifications: Strong knowledge of cloud computing principles (compute, storage, networking, database services). Previous experience using popular cloud providers (Oracle Cloud, AWS, etc.). Experience with monitoring and troubleshooting tools like Prometheus, Grafana, cURL, and Wireshark. Understanding of cloud cost optimization techniques and management tools. Knowledge of modern authentication methods (OAuth, JWT) and security best practices in distributed systems.
Experience with Oracle RDBMS and SQL. Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 1 week ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Node.js Developer Join Hutech Solutions - Innovate, Lead, and Transform! We are a global AI-driven software services and product engineering powerhouse, founded and led by visionary technology leaders from Walmart. We are redefining the future of technology by building next-gen solutions that empower businesses across the Banking, Finance, eCommerce, and Logistics industries. At Hutech, we don’t just build software; we create impact. Our culture fosters innovation, creativity, and continuous learning, enabling our team to push boundaries and solve real-world challenges using cutting-edge AI tools and techniques. Why Join Hutech? Be a part of a high-impact, innovation-driven team. Work on global-scale projects with industry-leading clients. Leverage the latest AI & emerging technologies to build groundbreaking solutions. Endless opportunities for career growth, mentorship, and leadership. A culture that values ideas, collaboration, and work-life balance. If you’re a passionate problem-solver, AI enthusiast, or technology disruptor, Hutech Solutions is the place for you. Join us and be a part of the future of technology! Job Summary Minimum 4 years of experience in developing APIs and RESTful services using Node.js. Experience with AWS API Gateway and Lambda functions. Strong understanding of the usage and implementation of JWT tokens & access control. API development and design experience using Node, Express, and REST. Produce high-quality code with sound security implementations. Identify application security risks and implement security patch procedures. Implement and improve application logging services. Work with the product and design teams to understand end-user requirements, formulate definitions of done, and translate that into an effective technical solution. Work with the QA team to develop testing protocols to identify and correct challenges. Must have good analytical, debugging, and problem-solving skills. Good communication skills. Responsibilities and Duties As a Node.js developer, you are expected to deliver the assigned APIs. Primarily you would be working on Node.js, JavaScript, MongoDB, MySQL, and PostgreSQL. Key Skills Node.js, JavaScript, MongoDB, MySQL, PostgreSQL, GitHub. Notice Period: Immediate. Job Location: Bangalore, work from office. Humantech Solutions India Pvt. Ltd., Bengaluru, Karnataka 560066. Ph: +91 80739 89712, corporate@hutechsolutions.com, www.hutechsolutions.com. An ISO 9001:2015 and ISO 27001:2022 certified company.
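The posting targets a Node.js/Express stack; purely to illustrate the JWT issue-and-verify flow it asks for (and not as the team's actual implementation), here is a minimal Python sketch using PyJWT. The secret, claims, and expiry are hypothetical placeholders.

```python
import datetime

import jwt  # PyJWT

SECRET = "change-me"   # hypothetical shared secret; store in a secrets manager in practice
ALGORITHM = "HS256"

def issue_token(user_id: str, role: str) -> str:
    """Create a short-lived access token carrying a role claim."""
    payload = {
        "sub": user_id,
        "role": role,
        "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(minutes=15),
    }
    return jwt.encode(payload, SECRET, algorithm=ALGORITHM)

def require_role(token: str, required_role: str) -> dict:
    """Verify signature and expiry, then enforce a simple role check."""
    claims = jwt.decode(token, SECRET, algorithms=[ALGORITHM])  # raises on a bad or expired token
    if claims.get("role") != required_role:
        raise PermissionError("insufficient role")
    return claims

if __name__ == "__main__":
    token = issue_token("user-123", "admin")
    print(require_role(token, "admin"))
```

In an Express service the equivalent check would typically sit in middleware in front of protected routes.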
Posted 1 week ago
9.0 - 12.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Role Description: Hands-on full-stack Java lead developer with extensive experience developing microservices-based APIs leveraging a containerized deployment stack, who will take overall responsibility for end-to-end software development and continuous integration and continuous deployment, meeting a high level of code quality while working within established timelines and Engineering Excellence best practices. The ideal candidate will be a dependable and resourceful software professional who can comfortably work in a large development team in a globally distributed, dynamic work environment that fosters diversity, teamwork and collaboration. The ability to work in a high-pressure environment is essential. Technical / Functional Proficiency: Overall 9 to 12 years of total experience in technology. Application development hands-on experience in Core Java, Hibernate, Struts, Spring, Spring Boot, Angular, and related Java technologies. Previous experience of microservices application design and leading a team of 6 to 8 engineers. Hands-on experience in setting up CI/CD pipelines and various aspects of Git workflows. Use of and proficiency in container-based deployment stack technologies such as Docker, OpenShift, and Kubernetes or similar platforms. Use of an API specification such as Swagger or RAML. Experience in distributed systems architecture, specifically designing microservices, event gateways, and eventual data consistency, as well as event stream logging and tracing. Experience with RESTful API development. Experience with version control (e.g. Git), issue/problem tracking through Jira, continuous integration tools (e.g. TeamCity), and deployment automation (e.g. uDeploy). Clear understanding of various design patterns and leveraging them to solve complex technical problems. Understanding of working in a Scrum team and the various Scrum ceremonies. Clear understanding of scalable and highly available systems. Responsibilities and Other skills: Proven ability in working with the development team members and other partners, with minimal supervision. Strong verbal and written communication skills and excellent interpersonal skills, with the ability to communicate well at all levels. Team player and self-starter who is thorough and willing to take on any assigned job/responsibilities. Mentor and coach junior members in the team. Ability to learn new skills quickly with little supervision, ensuring attention to detail remains a high priority. Efficiently and effectively manages work, time, and resources. Ability to work under high-pressure situations and effectively prioritize in a highly dynamic work environment that includes a global focus. Strong problem-solving and program execution skills while being process-oriented. Ability to understand the big picture – can step back and understand the context of problems before applying analytical skills to address the issues. Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code. Has the ability to operate with a limited level of direct supervision. Can exercise independence of judgement and autonomy.
Consult with users, clients, and other technology groups on issues, and recommend programming solutions, install, and support customer exposure systems. Apply fundamental knowledge of programming languages for design specifications. Analyze applications to identify vulnerabilities and security issues, as well as conduct testing and debugging. Serve as advisor or coach to new or lower-level analysts. Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions. Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents. Acts as SME to senior stakeholders and/or other team members. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Education: Bachelor’s degree/University degree or equivalent experience. This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Applications Development ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Most Relevant Skills Please see the requirements listed above. ------------------------------------------------------ Other Relevant Skills Core Java, Microservice Framework. ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
8.5 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Your Impact: You will work in the spirit of agile & a product engineering mindset - delivering the sprint outcomes, iteratively & incrementally, following the agile ceremonies. You’re expected to write clean, modular, production-ready code and take it through the production and post-production lifecycle. You will groom the stories functionally & help define the acceptance criteria (Functional & Non-Functional/NFRs). You will have a breadth of concepts, tools & technologies to address NFRs like security, performance, reliability, and maintainability, and understand the need for trade-offs. You will bring in expertise to optimize and make the relevant design decisions (considering trade-offs) at the module/component level. Manage the product lifecycle from requirements gathering and feasibility analysis through high-level and low-level design, development, user acceptance testing (UAT), and staging deployment. Integrate SAST and DAST to detect OWASP vulnerabilities, thereby securing a robust and scalable product journey roadmap. Qualifications Your Skills & Experience: A Bachelor’s degree in engineering with 8.5+ years of experience in building large-scale, large-volume services & distributed apps. Proficiency in Java, Spring/Spring Boot/Micronaut frameworks, Node.js, React, Kubernetes (container orchestration), and message queues (Kafka, ActiveMQ, RabbitMQ, Tibco/JMS). You are aware of multi-cloud platforms like AWS, GCP, Azure, etc. You apply SOLID and DRY design principles and design patterns & practice clean code. You are an expert at string manipulation, date/time arithmetic, collections & generics. You build reliable & high-performance apps leveraging eventing, streaming, and concurrency. You design and build microservices from the ground up, considering all NFRs & applying DDD and Bounded Contexts. You use one or more databases (RDBMS or NoSQL) based on the needs. You understand the significance of security aspects & compliance to data, code & application security policies; you write secure code to prevent known vulnerabilities. You understand HTTPS/TLS, symmetric/asymmetric cryptography, and certificates. You use logging frameworks like Log4j, NLog, etc. You use logging/monitoring solutions (Splunk, ELK, Grafana). Set Yourself Apart With You understand infrastructure as code (cattle over pets via Terraform/CloudFormation/Ansible). You understand reactive programming concepts and Actor models & use RxJava / Spring Reactor / Akka / Play, etc. You are aware of distributed tracing, debugging, and troubleshooting. You are aware of sidecar and service mesh usage along with microservices. You are aware of gateways, load balancers, CDNs, and edge caching. You are aware of Gherkin and Cucumber for BDD automation. You are aware of at least one distributed caching solution like Redis, Memcached, etc. A Tip From The Hiring Manager Software Development Engineers (SDE-2) are bright, talented, and motivated young minds with strong technical skills, developing software applications and services that make life easier for customers. The SDE-2 is expected to work with an agile team to develop, test, and maintain digital business applications. Additional Information Gender Neutral Policy. 18 paid holidays throughout the year. Generous parental leave and new parent transition program. Flexible work arrangements. Employee Assistance Programs to help you in wellness and well-being.
Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of the next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
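The stack this posting names is Java/Spring with Kafka; the sketch below is only a language-neutral illustration of the event-consumption and logging pattern those requirements point at, written in Python with kafka-python rather than the role's actual stack. The topic, broker address, and group id are assumptions.

```python
import json
import logging

from kafka import KafkaConsumer  # kafka-python

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("order-events")

# Hypothetical topic/broker/group; real values would come from configuration.
consumer = KafkaConsumer(
    "order-events",
    bootstrap_servers=["localhost:9092"],
    group_id="order-events-workers",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    enable_auto_commit=False,
    auto_offset_reset="earliest",
)

for message in consumer:
    try:
        event = message.value
        log.info("processing order %s from partition %d offset %d",
                 event.get("order_id"), message.partition, message.offset)
        # ... business logic would go here ...
        consumer.commit()            # commit only after successful processing
    except Exception:
        log.exception("failed to process offset %d; it will be retried", message.offset)
```

Committing offsets only after successful processing gives at-least-once delivery, which is the usual trade-off for event-driven microservices like those described above.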
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
We are looking for a skilled and proactive Cloud Security Engineer to join our dynamic team at Grid Dynamics. This role is focused on ensuring the security and compliance of our public cloud infrastructure across AWS and GCP environments. You will be instrumental in designing, implementing, and monitoring cloud security solutions, working closely with IT, engineering, and external SOC partners. This position is open in Hyderabad, Bangalore, and Chennai. This job is centred around the following practical tasks: Public cloud security architecture and compliance Selecting and deploying key native public cloud security tools and enabling the required security features in AWS and GCP Cloud security governance and compliance, including applying relevant security policies and ensuring that our public cloud infrastructure meets industry-standard security baselines (e.g. CIS) Working with IT and other Grid Dynamics teams on creating, deploying, and updating cloud security configuration templates/standard builds/etc. Assisting with cloud key management in order to prevent hardcoding of secrets (AWS KMS, GCP Cloud KMS, HashiCorp Vault, etc.) Enabling and configuring cloud web application firewalls such as AWS WAF and Google Cloud Armor Public cloud security monitoring and incident response Assisting with Elastic SIEM roll-out and implementation in both AWS and GCP, enabling and configuring native cloud security monitoring tools (CloudWatch, Google Cloud Logging & Monitoring) to work with Elastic SIEM Threat detection and response in the cloud (AWS GuardDuty, AWS Detective, Google Security Command Center, Chronicle) Cloud data classification and protection (Amazon Macie, Google Cloud Data Loss Prevention (DLP)) Collaborating with IT and an external SOC provider on incident-related matters Producing cloud alert and incident metrics for high-level management reports Public cloud security auditing and vulnerability management Conducting regular security assessments and participating in internal audits employing native cloud vulnerability scanning tools (AWS Inspector and Google Security Command Center), as well as compliance checkers (AWS Config, AWS Audit Manager, GCP Policy Intelligence) Assisting the affected system owners in mitigating the uncovered vulnerabilities and security misconfigurations Assisting developers with utilising SDLC-centric cloud security tools such as Amazon CodeGuru, SageMaker Clarify, and CodeWhisperer. Producing vulnerability metrics for high-level management reports General requirements Where necessary, readiness to respond outside business hours, taking into account Grid Dynamics’ geography Being able to take initiative in solving security problems Self-discipline and consistency in taking care of routine tasks Being collaborative with other security team members, as well as IT and various development/engineering teams, or any users of the affected systems Education & Qualifications Bachelor’s or Master’s degree in Computer Science, Information Security, Engineering, or a related field. Relevant cloud security certifications are highly desirable, such as: AWS Certified Security – Specialty, Google Professional Cloud Security Engineer, Certified Information Systems Security Professional (CISSP), Certified Cloud Security Professional (CCSP)
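As one concrete, hedged example of the threat-detection duties listed above, the sketch below pulls recent high-severity GuardDuty findings with boto3 so they could be forwarded to a SIEM or a report. The region and severity threshold are assumptions, not Grid Dynamics standards, and pagination is ignored for brevity.

```python
import boto3

REGION = "us-east-1"      # hypothetical region
MIN_SEVERITY = 7          # GuardDuty severities run roughly from 0.1 to 8.9

def high_severity_findings(region: str = REGION):
    """Return detailed GuardDuty findings at or above MIN_SEVERITY across all detectors."""
    gd = boto3.client("guardduty", region_name=region)
    findings = []
    for detector_id in gd.list_detectors()["DetectorIds"]:
        criteria = {"Criterion": {"severity": {"Gte": MIN_SEVERITY}}}
        ids = gd.list_findings(DetectorId=detector_id, FindingCriteria=criteria)["FindingIds"]
        if ids:
            # get_findings accepts up to 50 IDs per call; a real tool would batch them.
            details = gd.get_findings(DetectorId=detector_id, FindingIds=ids[:50])["Findings"]
            findings.extend(details)
    return findings

if __name__ == "__main__":
    for finding in high_severity_findings():
        print(finding["Severity"], finding["Type"], finding["Title"])
```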
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Position Summary Technical Lead – Big Data & Python skillset. As a Technical Lead, you will be a strong full-stack developer and individual contributor, responsible for designing application modules and delivering them from a technical standpoint. A high level of skill in producing high-level designs while working with the architect, and in technically leading module implementations. Must be a strong developer with the ability to innovate. Should be a go-to person on the assigned modules, applications/projects, and initiatives. Maintains appropriate certifications and applies respective skills on project engagements. Work you’ll do A unique opportunity to be a part of a growing Delivery, Methods & Tools team that drives consistency, quality, and efficiency of the services delivered to stakeholders. Responsibilities: Full-stack hands-on developer and strong individual contributor. Go-to person on the assigned projects. Able to understand and implement the project as per the proposed architecture. Implements best design principles and patterns. Understands and implements the security aspects of the application. Knows Azure DevOps (ADO) and is familiar with using it. Obtains/maintains appropriate certifications and applies respective skills on project engagements. Leads or contributes significantly to the Practice. Estimates and prioritizes Product Backlogs. Defines work items. Works on unit test automation. Recommends improvements to existing software programs as deemed necessary. Go-to person in the team for any technical issues. Conducts peer reviews. Conducts tech sessions within the team. Provides input to standards and guidelines. Implements best practices to enable consistency across all projects. Participates in the continuous improvement processes, as assigned. Mentors and coaches juniors in the team. Contributes to POCs. Supports the QA team with clarifications/doubts. Takes ownership of tollgate and deployment activities. Oversees the development of documentation. Participates in regular work, status communications and stakeholder updates. Supports development of intellectual capital. Contributes to the knowledge network. Acts as a technical escalation point. Conducts sprint reviews. Does code optimization and advises the team on best practices. Skills: Education qualification: BE/B Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of IT experience in application development, support, or maintenance activities. 2+ years of experience in team management. Must have in-depth knowledge of software development lifecycles including agile development and testing. Enterprise data management framework, data security & compliance (optional). Data ingestion, storage, and transformation. Data auditing and validation (optional). Data visualization with Power BI (optional). Data analytics systems (optional). Scaling and handling large data sets. Designing & building data services, with at least 2+ years in: Azure SQL DB, SQL Warehouse, ADF, Azure Storage, ADO CI/CD, Azure Synapse. Data model design. Data entities: modeling and depiction. Metadata management (optional). Database development patterns and practices: SQL / NoSQL (relational / non-relational with native JSON), flexible schemas, indexing practices, master/child data model management, columnar and row stores, and API/SDK for NoSQL database operations and management.
Design and implementation of data warehouses: Azure Synapse, Data Lake, Delta Lake, and Apache Spark management. Programming languages: PySpark/Python, C# (optional). API: invoke / request and response. PowerShell with Azure CLI (optional). Git with ADO repo management: branching strategies, version control management, rebasing, filtering, cloning, and merging. Debugging & performance tuning/optimization skills: ability to analyze PySpark code and PL/SQL, enhance response times, GC management, and debugging, logging, and alerting techniques. Prior experience that demonstrates good business understanding is needed (experience in a professional services organization is a plus). Excellent written and verbal communication, organization, analytical, planning, and leadership skills. Strong management, communication, technical and remote collaboration skills are a must. Experience in dealing with multiple projects and cross-functional teams, and ability to coordinate across teams in a large matrix organization environment. Ability to effectively conduct technical discussions directly with Project/Product management, and clients. Excellent team collaboration skills. Education & Experience: Education qualification: BE/B Tech (IT/CS/Electronics) / MCA / MSc Computer Science. 6-9 years of domain experience or other relevant industry experience. 2+ years of Product Owner, Business Analyst, or System Analysis experience. Minimum 3+ years of software development experience in .NET projects. 3+ years of experience in Agile/Scrum methodology. Work timings: 9am-4pm, 7pm-9pm. Location: Hyderabad. Experience: 6-9 yrs. The team At Deloitte, the Shared Services center improves overall efficiency and control while giving every business unit access to the company’s best and brightest resources. It also lets business units focus on what really matters – satisfying customers and developing new products and services to sustain competitive advantage. A shared services center is a simple concept, but making it work is anything but easy. It involves consolidating and standardizing a wildly diverse collection of systems, processes, and functions. And it requires a high degree of cooperation among business units that generally are not accustomed to working together – with people who do not necessarily want to change. The USI shared services team provides a wide array of services to the U.S. and it is constantly evaluating and expanding its portfolio. The shared services team provides call center support, Document Services support, financial processing and analysis support, Record management support, Ethics and compliance support and admin assistant support. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development.
Explore DU: The Leadership Center in India Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. #CAP-PD Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 300914
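To ground the Azure Synapse / Delta Lake skills listed in this posting, here is a small, assumption-laden PySpark sketch that aggregates raw sales data from ADLS Gen2 and writes the result as a Delta table. The paths and columns are hypothetical, and a Synapse or Databricks runtime with Delta Lake support is assumed.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 paths; a runtime with Delta Lake configured is assumed.
SOURCE = "abfss://raw@examplelake.dfs.core.windows.net/sales/"
TARGET = "abfss://curated@examplelake.dfs.core.windows.net/sales_daily/"

spark = SparkSession.builder.appName("sales-daily-aggregate").getOrCreate()

sales = spark.read.parquet(SOURCE)

# Aggregate to one row per day and region for downstream Power BI reporting.
daily = (
    sales.withColumn("sale_date", F.to_date("sold_at"))
         .groupBy("sale_date", "region")
         .agg(F.sum("amount").alias("total_amount"),
              F.countDistinct("order_id").alias("order_count"))
)

# Delta format provides ACID writes and time travel on the lake.
daily.write.format("delta").mode("overwrite").partitionBy("sale_date").save(TARGET)
```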
Posted 1 week ago
8.5 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description Your Impact: You will work in the spirit of agile & a product engineering mindset - delivering the sprint outcomes, iteratively & incrementally, following the agile ceremonies. You’re expected to write clean, modular, production-ready code and take it through the production and post-production lifecycle. You will groom the stories functionally & help define the acceptance criteria (Functional & Non-Functional/NFRs). You will have a breadth of concepts, tools & technologies to address NFRs like security, performance, reliability, and maintainability, and understand the need for trade-offs. You will bring in expertise to optimize and make the relevant design decisions (considering trade-offs) at the module/component level. Manage the product lifecycle from requirements gathering and feasibility analysis through high-level and low-level design, development, user acceptance testing (UAT), and staging deployment. Integrate SAST and DAST to detect OWASP vulnerabilities, thereby securing a robust and scalable product journey roadmap. Qualifications Your Skills & Experience: A Bachelor’s degree in engineering with 8.5+ years of experience in building large-scale, large-volume services & distributed apps. Proficiency in Java, Spring/Spring Boot/Micronaut frameworks, Node.js, React, Kubernetes (container orchestration), and message queues (Kafka, ActiveMQ, RabbitMQ, Tibco/JMS). You are aware of multi-cloud platforms like AWS, GCP, Azure, etc. You apply SOLID and DRY design principles and design patterns & practice clean code. You are an expert at string manipulation, date/time arithmetic, collections & generics. You build reliable & high-performance apps leveraging eventing, streaming, and concurrency. You design and build microservices from the ground up, considering all NFRs & applying DDD and Bounded Contexts. You use one or more databases (RDBMS or NoSQL) based on the needs. You understand the significance of security aspects & compliance to data, code & application security policies; you write secure code to prevent known vulnerabilities. You understand HTTPS/TLS, symmetric/asymmetric cryptography, and certificates. You use logging frameworks like Log4j, NLog, etc. You use logging/monitoring solutions (Splunk, ELK, Grafana). Set Yourself Apart With You understand infrastructure as code (cattle over pets via Terraform/CloudFormation/Ansible). You understand reactive programming concepts and Actor models & use RxJava / Spring Reactor / Akka / Play, etc. You are aware of distributed tracing, debugging, and troubleshooting. You are aware of sidecar and service mesh usage along with microservices. You are aware of gateways, load balancers, CDNs, and edge caching. You are aware of Gherkin and Cucumber for BDD automation. You are aware of at least one distributed caching solution like Redis, Memcached, etc. A Tip From The Hiring Manager Software Development Engineers (SDE-2) are bright, talented, and motivated young minds with strong technical skills, developing software applications and services that make life easier for customers. The SDE-2 is expected to work with an agile team to develop, test, and maintain digital business applications. Additional Information Gender Neutral Policy. 18 paid holidays throughout the year. Generous parental leave and new parent transition program. Flexible work arrangements. Employee Assistance Programs to help you in wellness and well-being.
Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of the next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Posted 1 week ago
8.5 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description Your Impact: You will work in the spirit of agile & a product engineering mindset - delivering the sprint outcomes, iteratively & incrementally, following the agile ceremonies. You’re expected to write clean, modular, production-ready code and take it through the production and post-production lifecycle. You will groom the stories functionally & help define the acceptance criteria (Functional & Non-Functional/NFRs). You will have a breadth of concepts, tools & technologies to address NFRs like security, performance, reliability, and maintainability, and understand the need for trade-offs. You will bring in expertise to optimize and make the relevant design decisions (considering trade-offs) at the module/component level. Manage the product lifecycle from requirements gathering and feasibility analysis through high-level and low-level design, development, user acceptance testing (UAT), and staging deployment. Integrate SAST and DAST to detect OWASP vulnerabilities, thereby securing a robust and scalable product journey roadmap. Qualifications Your Skills & Experience: A Bachelor’s degree in engineering with 8.5+ years of experience in building large-scale, large-volume services & distributed apps. Proficiency in Java, Spring/Spring Boot/Micronaut frameworks, Node.js, React, Kubernetes (container orchestration), and message queues (Kafka, ActiveMQ, RabbitMQ, Tibco/JMS). You are aware of multi-cloud platforms like AWS, GCP, Azure, etc. You apply SOLID and DRY design principles and design patterns & practice clean code. You are an expert at string manipulation, date/time arithmetic, collections & generics. You build reliable & high-performance apps leveraging eventing, streaming, and concurrency. You design and build microservices from the ground up, considering all NFRs & applying DDD and Bounded Contexts. You use one or more databases (RDBMS or NoSQL) based on the needs. You understand the significance of security aspects & compliance to data, code & application security policies; you write secure code to prevent known vulnerabilities. You understand HTTPS/TLS, symmetric/asymmetric cryptography, and certificates. You use logging frameworks like Log4j, NLog, etc. You use logging/monitoring solutions (Splunk, ELK, Grafana). Set Yourself Apart With You understand infrastructure as code (cattle over pets via Terraform/CloudFormation/Ansible). You understand reactive programming concepts and Actor models & use RxJava / Spring Reactor / Akka / Play, etc. You are aware of distributed tracing, debugging, and troubleshooting. You are aware of sidecar and service mesh usage along with microservices. You are aware of gateways, load balancers, CDNs, and edge caching. You are aware of Gherkin and Cucumber for BDD automation. You are aware of at least one distributed caching solution like Redis, Memcached, etc. A Tip From The Hiring Manager Software Development Engineers (SDE-2) are bright, talented, and motivated young minds with strong technical skills, developing software applications and services that make life easier for customers. The SDE-2 is expected to work with an agile team to develop, test, and maintain digital business applications. Additional Information Gender Neutral Policy. 18 paid holidays throughout the year. Generous parental leave and new parent transition program. Flexible work arrangements. Employee Assistance Programs to help you in wellness and well-being.
Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of the next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Posted 1 week ago
8.5 years
0 Lacs
Gurgaon, Haryana, India
On-site
Job Description Your Impact: You will work in the spirit of agile & a product engineering mindset - delivering the sprint outcomes, iteratively & incrementally, following the agile ceremonies. You’re expected to write clean, modular, production-ready code and take it through the production and post-production lifecycle. You will groom the stories functionally & help define the acceptance criteria (Functional & Non-Functional/NFRs). You will have a breadth of concepts, tools & technologies to address NFRs like security, performance, reliability, and maintainability, and understand the need for trade-offs. You will bring in expertise to optimize and make the relevant design decisions (considering trade-offs) at the module/component level. Manage the product lifecycle from requirements gathering and feasibility analysis through high-level and low-level design, development, user acceptance testing (UAT), and staging deployment. Integrate SAST and DAST to detect OWASP vulnerabilities, thereby securing a robust and scalable product journey roadmap. Qualifications Your Skills & Experience: A Bachelor’s degree in engineering with 8.5+ years of experience in building large-scale, large-volume services & distributed apps. Proficiency in Java, Spring/Spring Boot/Micronaut frameworks, Node.js, React, Kubernetes (container orchestration), and message queues (Kafka, ActiveMQ, RabbitMQ, Tibco/JMS). You are aware of multi-cloud platforms like AWS, GCP, Azure, etc. You apply SOLID and DRY design principles and design patterns & practice clean code. You are an expert at string manipulation, date/time arithmetic, collections & generics. You build reliable & high-performance apps leveraging eventing, streaming, and concurrency. You design and build microservices from the ground up, considering all NFRs & applying DDD and Bounded Contexts. You use one or more databases (RDBMS or NoSQL) based on the needs. You understand the significance of security aspects & compliance to data, code & application security policies; you write secure code to prevent known vulnerabilities. You understand HTTPS/TLS, symmetric/asymmetric cryptography, and certificates. You use logging frameworks like Log4j, NLog, etc. You use logging/monitoring solutions (Splunk, ELK, Grafana). Set Yourself Apart With You understand infrastructure as code (cattle over pets via Terraform/CloudFormation/Ansible). You understand reactive programming concepts and Actor models & use RxJava / Spring Reactor / Akka / Play, etc. You are aware of distributed tracing, debugging, and troubleshooting. You are aware of sidecar and service mesh usage along with microservices. You are aware of gateways, load balancers, CDNs, and edge caching. You are aware of Gherkin and Cucumber for BDD automation. You are aware of at least one distributed caching solution like Redis, Memcached, etc. A Tip From The Hiring Manager Software Development Engineers (SDE-2) are bright, talented, and motivated young minds with strong technical skills, developing software applications and services that make life easier for customers. The SDE-2 is expected to work with an agile team to develop, test, and maintain digital business applications. Additional Information Gender Neutral Policy. 18 paid holidays throughout the year. Generous parental leave and new parent transition program. Flexible work arrangements. Employee Assistance Programs to help you in wellness and well-being.
Company Description Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of the next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.
Posted 1 week ago
Upload Resume
Drag or click to upload
Your data is secure with us, protected by advanced encryption.
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
We have sent an OTP to your contact. Please enter it below to verify.
7672 Jobs | Paris,France