1.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
Who are we and what do we do?
BrowserStack is the world’s leading cloud-based software testing platform, empowering over 50,000 customers—including Amazon, Microsoft, Meta, and Google—to deliver high-quality software at speed. Founded in 2011 by Ritesh Arora and Nakul Aggarwal, the company has grown to support more than two million tests daily across 21 global data centers, providing instant access to 35,000+ real devices and browsers. With over 1,200 employees and a remote-first approach, BrowserStack operates at the intersection of scale, reliability, and innovation. Its suite of products spans manual and automated testing, visual regression, accessibility, and test management—all designed to simplify the testing process for modern development teams. Behind the scenes, BrowserStack continues to push the boundaries with AI capabilities like smart test case generation and design, flakiness detection, auto-healing and more, helping teams reduce maintenance overhead, debug faster, and catch issues earlier in the development lifecycle. Recognized for its innovation and growth, BrowserStack has been named to the Forbes Cloud 100 list for four consecutive years. With backing from investors like Accel, Bond, and Insight Partners, the company continues to expand its product offerings and global footprint. Joining BrowserStack means being part of a mission-driven team dedicated to shaping the future of software testing.
NOTE: This position is for Mumbai (Remote); please apply only if you are from Mumbai or open to relocating to Mumbai.
Desired Experience
1 - 3 years of experience
Strong knowledge of Python and Bash (or a similar Unix shell)
Working experience with Ansible, Terraform, Docker, Kubernetes, Prometheus, cloud platforms such as AWS and GCP, Nagios, Jenkins, and CI/CD pipelines
Good to have: virtualisation tools such as KVM and ESXi
Good knowledge of Linux operating systems and networking concepts
The drive and self-motivation to understand the intricate details of a complex infrastructure environment
Aggressive problem diagnosis and creative problem-solving skills
Startup mentality, high willingness to learn, and a strong work ethic
What will you do?
Work on AWS Kubernetes to manage our growing fleet of clusters globally (see the illustrative sketch below)
Identify areas of improvement in our frameworks, tools, and processes, and strive to make them better
Evaluate our success metrics and evolve our reporting systems
Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime; implement effective incident management processes and post-incident reviews
Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues
Collaborate with the internal team, stakeholders, and partners to implement effective solutions
Provide daily support to customers as they onboard and use our platforms, helping them optimize value, performance, and reliability for their workloads
Contribute to enhancing our platforms' capabilities, prioritizing reliability and scalability
Exhibit strong communication skills and maintain a support-oriented approach when interacting with both technical and non-technical audiences
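Illustrative aside (not part of the posting): a minimal Python sketch of the kind of fleet-health check the Kubernetes responsibilities above describe, using the official kubernetes client. It assumes local kubeconfig access with one context per cluster; nothing here is BrowserStack tooling.

```python
# Minimal sketch: report pods that are not Running/Succeeded across every kubeconfig context.
from kubernetes import client, config

def unhealthy_pods(context_name: str):
    # Build an API client bound to one cluster context (assumed to exist in kubeconfig).
    api = client.CoreV1Api(api_client=config.new_client_from_config(context=context_name))
    for pod in api.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            yield pod.metadata.namespace, pod.metadata.name, pod.status.phase

if __name__ == "__main__":
    contexts, _active = config.list_kube_config_contexts()
    for ctx in contexts:
        name = ctx["name"]
        for namespace, pod_name, phase in unhealthy_pods(name):
            print(f"{name}\t{namespace}/{pod_name}\t{phase}")
```

A real fleet check would typically run on a schedule and push results into an alerting or reporting system rather than printing them.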
Benefits
In addition to your total compensation, you will be eligible for the following benefits, which will be governed by Company policy:
Medical insurance for self, spouse, up to 2 dependent children, and parents or parents-in-law, up to INR 5,00,000
Gratuity as per the Payment of Gratuity Act, 1972
Unlimited Time Off to ensure our people invest in their wellbeing, to rest and rejuvenate, and to spend quality time with family and friends
Remote-First work environment in India
Remote-First Benefit for home office setup, connectivity, accessories, co-working spaces, and wellbeing to ensure an amazing remote work experience
Posted 16 hours ago
3.0 years
0 Lacs
Hyderābād
Remote
Your opportunity At New Relic, we provide businesses with a state-of-the-art observability platform, leveraging advanced technologies to deliver real-time insights into the performance of software applications and infrastructure. We enable organizations to monitor, analyze, and optimize their systems to achieve enhanced reliability, performance, and user experience. New Relic is a leader in the industry and has been on the forefront of developing cutting edge AI/ML solutions to revolutionise observability. What you'll do Drive the design, development, and enhancement of core features and functionalities of our AI platform with micro-services architecture and deliver scalable, secure and reliable solutions Be proactive in identifying and addressing performance bottlenecks, applying optimizations, and maintaining the stability and availability of our platform Build thoughtful, high-quality code that is easy to read and maintain Collaborate with your team, external contributors, and others to help solve problems. Write and share proposals to improve team processes and approaches. This role requires Bachelor’s degree in Computer Science discipline or related field 3+ years of experience as a Software Engineer working with Python, developing production grade applications Demonstrated experience in designing, developing, and maintaining large-scale cloud platforms with a strong understanding of scalable distributed systems and microservices architecture Proficiency in back-end frameworks such as Flask/FastAPI; Pydantic for robust models; asyncio, aiohttp libraries for asynchronous request handling; Decorators for abstraction; Pytest for testing Competency in using Python threading and multiprocessing modules for parallel task execution. Knowledge of Coroutines. Understand the GIL and its implications on concurrency Experience in building secure infrastructure having simulated race condition attacks, injection attacks; leading teams through real incident management situations with strong debugging skills Demonstrated experience in working with both Relational and NoSQL DBs; message queueing systems (SQS/Kafka/RabbitMQ) Up to date with cloud technologies - AWS/Azure/GCP, Serverless, Docker, Kubernetes, CI/CD pipelines among others Bonus points if you have Masters in Computer Science discipline Exposure to Machine Learning and GenAI technologies Experience with Authentication/Authorization services etc. Communication protocol - gRPC GraphQL API working knowledge Please note that visa sponsorship is not available for this position. Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics’ different backgrounds and abilities, and recognize the different paths they took to reach us – including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We’re looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid. 
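Illustrative aside (not part of the posting): a rough sketch of the async service pattern the requirements above name (FastAPI, Pydantic, aiohttp). The endpoint path, model fields, and downstream URL are hypothetical, and Pydantic v2 is assumed.

```python
# Minimal async ingestion endpoint: validate the request with Pydantic, then call a
# (hypothetical) downstream service without blocking the event loop.
import aiohttp
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class IngestRequest(BaseModel):
    service: str
    payload: dict

@app.post("/v1/ingest")
async def ingest(req: IngestRequest) -> dict:
    async with aiohttp.ClientSession() as session:
        # model_dump() assumes Pydantic v2; use .dict() on v1.
        async with session.post("http://downstream.internal/events", json=req.model_dump()) as resp:
            status = resp.status
    return {"service": req.service, "downstream_status": status}
```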
Our hiring process In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers’ means that a criminal background check is required to join New Relic. We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
Posted 16 hours ago
8.0 - 11.0 years
3 - 7 Lacs
Hyderābād
On-site
Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients’ most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.
Job Description
Requirement: SDET-Java
Experience & Band: B3 - 8 to 11 years
Must Have: Candidates should possess between 8 to 10 years of experience in software test automation utilizing Java/J2EE, with a strong understanding of OOP concepts, design patterns, recursive and non-recursive programming, method overloading, method overriding, ArrayList, Vector, serialization, HashSet, and more. A minimum of 3 years of automation experience is mandatory, with overall experience exceeding 5 years. Proven, deep hands-on coding skills with Selenium WebDriver, BDD, and JavaScript are essential, along with experience in framework development. Experience in API automation using RestAssured or Selenium (NOT Postman or SoapUI) is required. Hands-on experience with CI/CD practices utilizing Maven, GitHub, Jenkins, and TestNG is crucial. Candidates should have experience in continuous test automation using Jenkins or Azure DevOps. Experience in enabling in-sprint automation is also necessary. Familiarity with Automation Life Cycle activities, such as feasibility studies, estimations, and proofs of concept, along with strong communication skills, is important.
Nice to Have: Knowledge of AWS, Kubernetes, Ansible/Chef, Docker, MongoDB, JavaScript, Spring Boot, Nexus/JFrog, etc., is a plus. Familiarity with code quality tools such as Sonar, JaCoCo, and SeaLights is advantageous. Exposure to AI/ML concepts and insights on their applicability to testing is beneficial. Experience with non-functional testing, including performance testing (using JMeter), accessibility testing (using JAWS), and visual testing (using Applitools), is desirable. Familiarity with Site Reliability Engineering (SRE) practices is also a plus.
Work Location: Pune, Bangalore, Chennai, Hyderabad.
Mandatory Skills: SDET.
Experience: 5-8 Years.
Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
Posted 16 hours ago
8.0 years
0 Lacs
Telangana
On-site
About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.
About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.
Position Details
Job Description
Enterprise Infrastructure Services (EIS) at Chubb is focused on delivering services across multiple disciplines at Chubb. Cloud Engineering is one of the key services and is responsible for delivering cloud-based services both on-premises and off-premises. As part of its continued transformation, Chubb is increasing the pace of application transformation into containers and cloud adoption. As such, we are seeking an experienced Cloud Engineer who can be part of this exciting journey at Chubb. As an experienced, hands-on cloud engineer, you will be responsible for both infrastructure automation and container platform adoption at Chubb. A successful candidate will have hands-on experience with container platforms (Kubernetes) and cloud platforms (Azure), along with experience in software development and DevOps enablement through automation and Infrastructure as Code. The successful candidate will also have the opportunity to build and innovate solutions for various infrastructure problems, from developer experience to operational excellence, across the services provided by the cloud engineering team.
Responsibilities
Work on cloud transformation projects across cloud engineering to provide automation and self-service
Implement automation and self-service capabilities using CI/CD pipelines for infrastructure
Write and maintain Terraform-based Infrastructure as Code
Build operational capabilities around the cloud platform for handover to Operations after release
Design and document controls and governance policies around the Azure platform, and automate deployment of those policies (see the sketch after this posting)
Manage end-user collaboration and conduct regular sessions to educate end users on services and automation capabilities
Find opportunities for automating away manual tasks
Attend to escalations from support teams and provide engineering assistance during major production issues
Key Requirements
Experience with large cloud transformation projects, preferably in Azure
Extensive experience with cloud platforms, mainly Azure; strong understanding of Azure services with demonstrated experience in AKS, App Services, Logic Apps, IAM, Load Balancers, Application Gateway, NSG, storage, and Azure Key Vault
Knowledge of networking concepts and protocols, including VNet, DNS, and load balancing
Experience writing Infrastructure as Code and pipelines, preferably using Terraform, Ansible, Bash, Python, and Jenkins
Has written and executed Terraform-based Infrastructure as Code
Ability to work in both Windows and Linux environments with container platforms such as Kubernetes, AKS, and GKE
DevOps experience, with the ability to use GitHub, Jenkins, and Nexus for pipeline automation and artifact management
Implementation experience with secure transports using TLS and encryption, along with authentication/authorization flows
Experience in certificate management for containerized applications
Experience with Jenkins and similar CI/CD tools; experience with GitOps would be an added advantage
Good to have: Python coding experience in automation or any other area
Education and Qualification
Bachelor's degree in Computer Science, Computer Engineering, Information Technology or a relevant field
Minimum of 8 years of experience in IT automation, with 2 years supporting Azure-based cloud automation and 2 years of Kubernetes and Docker
Relevant Azure certifications
Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results
Start-Up Culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment
Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth.
Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling like Education Reimbursement Programs, Certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like Employee Assistance Program (EAP), Yearly Free Health campaigns and comprehensive Insurance benefits. Application Process Our recruitment process is designed to be transparent, and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers
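Illustrative aside (not part of the posting): the governance responsibilities above could be supported by small SDK scripts like the following sketch, which uses azure-identity and azure-mgmt-resource to flag resource groups missing a required tag. The subscription ID, tag name, and overall approach are assumptions, not Chubb tooling.

```python
# Flag Azure resource groups that lack a required governance tag (placeholder tag name).
import os
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

REQUIRED_TAG = "cost-center"  # hypothetical governance tag

def untagged_resource_groups(subscription_id: str):
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
    for rg in client.resource_groups.list():
        tags = rg.tags or {}
        if REQUIRED_TAG not in tags:
            yield rg.name, rg.location

if __name__ == "__main__":
    subscription = os.environ["AZURE_SUBSCRIPTION_ID"]
    for name, location in untagged_resource_groups(subscription):
        print(f"missing '{REQUIRED_TAG}' tag: {name} ({location})")
```

In practice a check like this would usually be enforced with Azure Policy; a script is shown only to illustrate the automation-first approach the posting describes.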
Posted 16 hours ago
3.0 years
8 - 9 Lacs
Hyderābād
On-site
You’re ready to gain the skills and experience needed to grow within your role and advance your career — and we have the perfect software engineering opportunity for you. As a Data Engineer III at JPMorgan Chase within the Consumer & Community Banking Technology Team, you are part of an agile team that works to enhance, design, and deliver the software components of the firm’s state-of-the-art technology products in a secure, stable, and scalable way. As an emerging member of a software engineering team, you execute software solutions through the design, development, and technical troubleshooting of multiple components within a technical product, application, or system, while gaining the skills and experience needed to grow within your role. Job responsibilities Executes standard software solutions, design, development, and technical troubleshooting Writes secure and high-quality code using the syntax of at least one programming language with limited guidance Designs, develops, codes, and troubleshoots with consideration of upstream and downstream systems and technical implications Applies knowledge of tools within the Software Development Life Cycle toolchain to improve the value realized by automation Applies technical troubleshooting to break down solutions and solve technical problems of basic complexity Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems. Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture. Design & develop data pipelines end to end using PySpark, Java, Python and AWS Services. Utilize Container Orchestration services including Kubernetes, and a variety of AWS tools and services. Learns and applies system processes, methodologies, and skills for the development of secure, stable code and systems Adds to team culture of diversity, equity, inclusion, and respect Required qualifications, capabilities, and skills Formal training or certification on software engineering concepts and 3+ years of applied experience. Hands-on practical experience in system design, application development, testing, and operational stability Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages Hands-on practical experience in developing spark-based Frameworks for end-to-end ETL, ELT & reporting solutions using key components like Spark & Spark Streaming. Proficient in coding in one or more Coding languages – Core Java, Python and PySpark Experience with Relational and Datawarehouse databases, Cloud implementation experience with AWS including: AWS Data Services: Proficiency in Lake formation, Glue ETL (or) EMR, S3, Glue Catalog, Athena, Airflow (or) Lambda + Step Functions + Event Bridge, ECS Cluster and ECS Apps Data De/Serialization: Expertise in at least 2 of the formats: Parquet, Iceberg, AVRO, JSON AWS Data Security: Good Understanding of security concepts such as: Lake formation, IAM, Service roles, Encryption, KMS, Secrets Manager Proficiency in automation and continuous delivery methods. Preferred qualifications, capabilities, and skills Experience in Snowflake nice to have. Solid understanding of agile methodologies such as CI/CD, Applicant Resiliency, and Security. In-depth knowledge of the financial services industry and their IT systems. 
Practical cloud-native experience, preferably with AWS.
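Illustrative aside (not part of the posting): a minimal PySpark sketch of the Spark-based ETL pattern the qualifications above call for. The bucket paths, column names, and aggregation are placeholders, not JPMorgan Chase systems.

```python
# Read raw parquet from a (placeholder) S3 raw zone, aggregate settled transactions per
# account per day, and write a partitioned curated dataset back to S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("transactions-daily-etl").getOrCreate()

raw = spark.read.parquet("s3://example-raw-zone/transactions/")  # placeholder path

daily_totals = (
    raw.filter(F.col("status") == "SETTLED")
       .groupBy("account_id", F.to_date("settled_at").alias("settle_date"))
       .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("txn_count"))
)

(daily_totals.write
    .mode("overwrite")
    .partitionBy("settle_date")
    .parquet("s3://example-curated-zone/daily_totals/"))  # placeholder path
```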
Posted 16 hours ago
10.0 years
20 - 25 Lacs
Hyderābād
On-site
We are hiring a Java Architect with 10+ years of experience in Java, Spring, Spring Boot, JPA, Hibernate, JSP, JDBC, J2EE, and Struts. Expertise in designing scalable architectures and leading development teams is essential.
Microservices and Cloud Architecture: proven experience in designing and implementing microservices-based architectures and deploying them on cloud platforms like AWS, Azure, or GCP.
DevOps and CI/CD Implementation: strong understanding of DevOps practices, with hands-on experience in setting up CI/CD pipelines using tools like Jenkins, Docker, Kubernetes, and Git.
Performance Tuning and Optimization: deep expertise in application performance tuning, memory management, and scalability best practices for enterprise-level systems.
Stakeholder Collaboration: ability to collaborate with product managers, business analysts, and cross-functional teams to translate business requirements into robust technical solutions.
Mentorship and Code Governance: experience in mentoring developers, conducting code reviews, enforcing coding standards, and promoting best practices across the development team.
Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Schedule: Day shift
Application Questions: How soon can you join if selected? Are you willing to work from the office?
Work Location: In person
Posted 16 hours ago
1.0 years
11 - 13 Lacs
Hyderābād
Remote
Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune (Hybrid)
Job Description: GCP Cloud Engineer
Shift time: 2 PM to 11 PM IST
Budget: up to 13 LPA
Primary Skills & Weightage
GCP - 50%
Kubernetes - 25%
Node.js - 25%
Technical Skills
Cloud: Experience working with Google Cloud Platform (GCP) services.
Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
Messaging: Familiarity with Apache Kafka for producing and consuming messages (see the sketch after this posting).
Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
Development Tools: Comfortable using Visual Studio Code (VS Code) or similar IDEs.
Additional Requirements
Communication: Ability to communicate clearly in English (written and verbal).
Collaboration: Experience working in distributed or remote teams.
Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
Learning: Willingness to learn new technologies and adapt to changing requirements.
Preferred but not required: Experience with CI/CD pipelines. Familiarity with Agile methodologies. Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).
Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
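Illustrative aside (not part of the posting): the role above centres on Node.js, but the Kafka-to-PostgreSQL pattern it names looks roughly like the sketch below, written in Python purely for illustration using kafka-python and psycopg2. The topic, table, and connection details are placeholders.

```python
# Consume JSON events from a (placeholder) Kafka topic and persist them to PostgreSQL.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                        # placeholder topic
    bootstrap_servers="kafka.example.internal:9092", # placeholder broker
    group_id="orders-writer",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

conn = psycopg2.connect("dbname=appdb user=app host=pg.example.internal")  # placeholder DSN

with conn, conn.cursor() as cur:
    for message in consumer:
        event = message.value
        # Plain insert for brevity; a real consumer would handle retries and duplicates.
        cur.execute(
            "INSERT INTO order_events (order_id, payload) VALUES (%s, %s)",
            (event["order_id"], json.dumps(event)),
        )
        conn.commit()
```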
Posted 16 hours ago
3.0 - 6.0 years
5 - 7 Lacs
Hyderābād
On-site
CORE BUSINESS OPERATIONS
The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.
ROLE
Level: Consultant
As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you play a crucial role in building and maintaining cloud infrastructure on Amazon Web Services (AWS). You will also be responsible for ownership of tasks assigned through ServiceNow (SNOW), dashboards, order forms, etc.
The work you will do includes:
Build and operate cloud infrastructure on AWS
Continuously monitor the health and performance of the infrastructure and resolve any issues
Use tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration
Administer the EC2 instances' operating systems, such as Windows and Linux
Work with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services
Implement monitoring and logging for infrastructure and applications
Keep the infrastructure up to date with the latest security patches and software versions
Collaborate with development, operations, and security teams to establish best practices for software development, build, deployment, and infrastructure management
Handle tasks related to IAM, monitoring, backup, and vulnerability remediation
Participate in performance testing and capacity planning activities
Documentation, weekly/bi-weekly deck preparation, and KB article updates
Handover and on-call support during weekends on a rotational basis
QUALIFICATIONS
Skills / Project Experience:
Must Have:
3 - 6 years of hands-on experience in AWS Cloud, CloudFormation templates, and Windows/Linux administration (see the sketch after this posting)
Understanding of 2-tier, 3-tier, or multi-tier architecture
Experience with IaaS/PaaS/SaaS
Understanding of disaster recovery
Networking and security expertise
Knowledge of PowerShell, Shell, and Python
Associate/Professional-level certification in AWS solution architecture
ITIL Foundation certification
Good interpersonal and communication skills
Flexibility to adapt and apply innovation to varied business domains, and to apply technical solutioning and learnings to use cases across business domains and industries
Knowledge of and experience working with Microsoft Office tools
Good to Have:
Understanding of container technologies such as Docker, Kubernetes, and OpenShift
Understanding of application and other infrastructure monitoring tools
Understanding of the end-to-end infrastructure landscape
Experience with virtualization platforms
Knowledge of Chef, Puppet, Bamboo, Concourse, etc.
Knowledge of microservices, data lakes, machine learning, etc.
Education:
B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university
Prior Experience:
3 - 6 years of experience working with AWS, system administration, IaC, etc.
Location: Hyderabad/Pune
The team
Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0 For information on the life of an Analyst at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 302308
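Illustrative aside (not part of the posting): a small boto3 sketch of the routine AWS administration listed under Must Have, reporting running EC2 instances per region. The region list is a placeholder, and this is not Deloitte tooling.

```python
# List running EC2 instances across a few (placeholder) regions using boto3 paginators.
import boto3

REGIONS = ["us-east-1", "eu-west-1"]  # placeholder regions

def running_instances(region: str):
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                yield instance["InstanceId"], instance["InstanceType"], instance["LaunchTime"]

if __name__ == "__main__":
    for region in REGIONS:
        for instance_id, instance_type, launched in running_instances(region):
            print(f"{region}\t{instance_id}\t{instance_type}\tlaunched {launched:%Y-%m-%d}")
```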
Posted 16 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Description
Solution Architects assess a project's technical feasibility, as well as implementation risks. They are responsible for the design and implementation of the overall technical and solution architecture. They define the structure of a system, its interfaces, the solution principles guiding the organisation, the software design and the implementation. The scope of the Solution Architect's role is defined by the business issue at hand. To fulfil the role, a Solution Architect utilises business and technology expertise and experience.
Job Description - Grade Specific
Managing Solution/Delivery Architect - Design, deliver and manage complete solutions. Demonstrate leadership of topics in the architect community and show a passion for technology and business acumen. Work as a stream lead at CIO/CTO level for an internal or external client. Lead Capgemini operations relating to market development and/or service delivery excellence. Are seen as a role model in their (local) community.
Certification: preferably Capgemini Architects certification level 2 or above, relevant solution certifications, IAF and/or industry certifications such as TOGAF 9 or equivalent.
Skills (competencies): SDLC Methodology, Active Listening, Adaptability, Agile (Software Development Framework), Analytical Thinking, APIs, Automation (Frameworks), AWS (Cloud Platform), AWS Architecture, Business Acumen, Business Analysis, C#, Capgemini Integrated Architecture Framework (IAF), Cassandra, Change Management, Cloud Architecture, Coaching, Collaboration, Confluence, Delegation, DevOps, Docker, ETL Tools, Executive Presence, GitHub, Google Cloud Platform (GCP), IAF (Framework), Influencing, Innovation, Java (Programming Language), Jira, Kubernetes, Managing Difficult Conversations, Microsoft Azure DevOps, Negotiation, Network Architecture, Oracle (Relational Database), Problem Solving, Project Governance, Python, Relationship-Building, Risk Assessment, Risk Management, SAFe, Salesforce (Integration), SAP (Integration), SharePoint, Slack, SQL Server (Relational Database), Stakeholder Management, Storage Architecture, Storytelling, Strategic Thinking, Sustainability Awareness, Teamwork, Technical Governance, Time Management, TOGAF (Framework), Verbal Communication, Written Communication
Posted 16 hours ago
3.0 - 7.0 years
7 - 16 Lacs
Hyderābād
On-site
AI Specialist / Machine Learning Engineer
Location: On-site (Hyderabad)
Department: Data Science & AI Innovation
Experience Level: Mid-Senior
Reports To: Director of AI / CTO
Employment Type: Full-time
Job Summary
We are seeking a skilled and forward-thinking AI Specialist to join our advanced technology team. In this role, you will lead the design, development, and deployment of cutting-edge AI/ML solutions, including large language models (LLMs), multimodal systems, and generative AI. You will collaborate with cross-functional teams to develop intelligent systems, automate complex workflows, and unlock insights from data at scale.
Key Responsibilities
Design and implement machine learning models for natural language processing (NLP), computer vision, predictive analytics, and generative AI.
Fine-tune and deploy LLMs using frameworks such as Hugging Face Transformers, OpenAI APIs, and Anthropic Claude.
Develop Retrieval-Augmented Generation (RAG) pipelines using tools like LangChain, LlamaIndex, and vector databases (e.g., Pinecone, Weaviate, Qdrant); an illustrative retrieval sketch follows this posting.
Productionize ML workflows using MLflow, TensorFlow Extended (TFX), or AWS SageMaker Pipelines.
Integrate generative AI with business applications, including Copilot-style features, chat interfaces, and workflow automation.
Collaborate with data scientists, software engineers, and product managers to build and scale AI-powered products.
Monitor, evaluate, and optimize model performance, focusing on fairness, explainability (e.g., SHAP, LIME), and data/model drift.
Stay informed on cutting-edge AI research (e.g., NeurIPS, ICLR, arXiv) and evaluate its applicability to business challenges.
Tools & Technologies
Languages & Frameworks: Python, PyTorch, TensorFlow, JAX, FastAPI, LangChain, LlamaIndex
ML & AI Platforms: OpenAI (GPT-4/4o), Anthropic Claude, Mistral, Cohere, Hugging Face Hub & Transformers, Google Vertex AI, AWS SageMaker, Azure ML
Data & Deployment: MLflow, DVC, Apache Airflow, Ray, Docker, Kubernetes, RESTful APIs, GraphQL, Snowflake, BigQuery, Delta Lake
Vector Databases & RAG Tools: Pinecone, Weaviate, Qdrant, FAISS, ChromaDB, Milvus
Generative & Multimodal AI: DALL·E, Sora, Midjourney, Runway, Whisper, CLIP, SAM (Segment Anything Model)
Qualifications
Bachelor's or Master's in Computer Science, AI, Data Science, or a related discipline
3-7 years of experience in machine learning or applied AI
Hands-on experience deploying ML models to production environments
Familiarity with LLM prompt engineering and fine-tuning
Strong analytical thinking, problem-solving ability, and communication skills
Preferred Qualifications
Contributions to open-source AI projects or academic publications
Experience with multi-agent frameworks (e.g., AutoGPT, OpenDevin)
Knowledge of synthetic data generation and augmentation techniques
Job Type: Permanent
Pay: ₹734,802.74 - ₹1,663,085.14 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Work Location: In person
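Illustrative aside (not part of the posting): a bare-bones retrieval step of the RAG pipelines mentioned above, embedding a few documents with sentence-transformers and searching a FAISS index. The model name and documents are assumptions; a production pipeline would typically swap FAISS for a managed vector database and pass the retrieved text to an LLM as grounding context.

```python
# Embed documents, index them, and retrieve the best match for a query.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Refunds are processed within five business days.",
    "Premium support is available on the enterprise plan.",
    "Passwords can be reset from the account settings page.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model
doc_vecs = model.encode(docs, normalize_embeddings=True)

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(doc_vecs.shape[1])
index.add(np.asarray(doc_vecs, dtype="float32"))

query_vec = model.encode(["How long does a refund take?"], normalize_embeddings=True)
scores, ids = index.search(np.asarray(query_vec, dtype="float32"), 1)
print(docs[ids[0][0]], float(scores[0][0]))
# The retrieved passage would then be placed into the LLM prompt as context.
```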
Posted 16 hours ago
3.0 - 5.0 years
0 Lacs
Hyderābād
On-site
Be a part of a team that’s ensuring Dell Technologies' product integrity and customer satisfaction. Our IT Software Engineer team turns business requirements into technology solutions by designing, coding and testing/debugging applications, as well as documenting procedures for use and constantly seeking quality improvements. Join us to do the best work of your career and make a profound social impact as a Software Engineer 2-IT on our Software Engineer-IT Team in Hyderabad.
What you’ll achieve
As an IT Software Engineer, you will deliver products and improvements for a changing world. Working at the cutting edge, you will craft and develop software for platforms, peripherals, applications and diagnostics — all with the most sophisticated technologies, tools, software engineering methodologies and partnerships.
You will:
Work with complicated business applications across functional areas
Take design from concept to production, which may include design reviews, feature implementation, debugging, testing, issue resolution and factory support
Manage design and code reviews with a focus on the best user experience, performance, scalability and future expansion
Take the first step towards your dream career. Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:
Essential Requirements
Strong experience in scripting languages like Bash, Python, or Groovy for automation
Working experience with Git-based workflows and understanding of CI/CD pipelines, runners, and YAML configuration
Hands-on experience with Docker/Kaniko, Kubernetes and microservice-based deployments
Strong knowledge of GitLab, Ansible, Terraform, and monitoring tools like Prometheus and Grafana (see the sketch after this posting)
Experience troubleshooting deployment or integration issues and optimizing CI/CD pipelines efficiently
Desirable Requirements
3 to 5 years of experience in software/coding/IT software
Who we are
We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.
Application closing date: 30-July-25
Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment.
Job ID: R270155
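Illustrative aside (not part of the posting): a small sketch of the monitoring glue the requirements above imply, querying a Prometheus server's HTTP API for scrape targets that are down. The server URL is a placeholder, not Dell infrastructure.

```python
# Query Prometheus for targets whose `up` metric is 0 and print them.
import requests

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder

def down_targets():
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    for result in resp.json()["data"]["result"]:
        labels = result["metric"]
        yield labels.get("job", "unknown"), labels.get("instance", "unknown")

if __name__ == "__main__":
    for job, instance in down_targets():
        print(f"DOWN: job={job} instance={instance}")
```

A check like this could run as a CI/CD gate or a scheduled job and feed an alerting channel instead of stdout.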
Posted 16 hours ago
6.0 - 10.0 years
0 Lacs
Delhi
On-site
Job requisition ID :: 84234
Date: Jun 15, 2025
Location: Delhi
Designation: Senior Consultant
Entity:
What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.
The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about the Analytics and Information Management Practice.
Work you’ll do
As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.
We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.
Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar.
Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift (see the sketch after this posting)
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Required Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5.
Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB , Step Functions Big Data: Hadoop, Spark, Delta Lake Programming: Python, PySpark Databases: SQL, PostgreSQL, NoSQL Data Warehousing and Analytics ETL/ELT processes Data Lake architectures Version control: Github Your role as a leader At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultant across our organization: Develop high-performing people and teams through challenging and meaningful opportunities Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction. Acts as a role model, embracing and living our purpose and values, and recognizing others for the impact they make How you will grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: To make an impact that matters . Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the Communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
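Illustrative aside (not part of the posting): running an Athena query from boto3 and polling for completion, one of the building blocks behind responsibility 4 above. The region, database, table, and results bucket are placeholders; a production pipeline would more likely orchestrate this via Step Functions or Airflow, as the posting suggests.

```python
# Start an Athena query, poll until it finishes, and report the final state.
import time

import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

def run_query(sql: str, database: str, output_s3: str) -> str:
    query_id = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            return state
        time.sleep(2)

final_state = run_query(
    "SELECT event_date, COUNT(*) AS events FROM events_table GROUP BY event_date",  # placeholder SQL
    database="analytics_db",                   # placeholder database
    output_s3="s3://example-athena-results/",  # placeholder bucket
)
print("query finished with state:", final_state)
```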
Posted 16 hours ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Company Overview
Docusign brings agreements to life. Over 1.5 million customers and more than a billion people in over 180 countries use Docusign solutions to accelerate the process of doing business and simplify people’s lives. With intelligent agreement management, Docusign unleashes business-critical data that is trapped inside of documents. Until now, these were disconnected from business systems of record, costing businesses time, money, and opportunity. Using Docusign’s Intelligent Agreement Management platform, companies can create, commit, and manage agreements with solutions created by the #1 company in e-signature and contract lifecycle management (CLM).
What you'll do
Docusign is looking for a highly motivated Software Engineer, Backend to join our Search Platform team. The ideal candidate enjoys fast-paced entrepreneurial environments where you can solve difficult problems using current technologies and tools. They will collaborate well with other team members when brainstorming, designing, and implementing new solutions, think about ways to improve processes and make the team more effective, and mentor and model engineering best practices. This role will work on a complex ecosystem in the cloud with a focus on scale and availability. The Software Engineer will be responsible for developing software solutions using object-oriented methodologies, design patterns, and building scalable, highly available systems. This position is an individual contributor role reporting to the Director of Engineering.
Responsibilities
Write high-quality code in C# .NET and other object-oriented languages that is easy to maintain and test
Maintain a data-focused approach, ensuring five-nines (99.999%) availability and that we are solving the right problems
Participate in an Agile environment using Scrum software development practices, automated unit testing, CI/CD, code reviews, and version control systems (Git)
Raise issues proactively that might impact delivery commitments
Diagnose and resolve production-impacting issues and maintain the code as needed
Drive strategic code sharing and architecture for one or more functional areas
Job Designation
Hybrid: Employee divides their time between in-office and remote work. Access to an office location is required. (Frequency: Minimum 2 days per week; may vary by team but will be a weekly in-office expectation.) Positions at Docusign are assigned a job designation of either In Office, Hybrid or Remote and are specific to the role/job. Preferred job designations are not guaranteed when changing positions within Docusign. Docusign reserves the right to change a position's job designation depending on business needs and as permitted by local law.
What you bring
Basic:
BA/BS degree or equivalent work experience
5+ years industry experience in Software Engineering
Experience with C#, Java, C++, or Go
Experience with public cloud (e.g., Azure or AWS) and Kubernetes
Experience developing software solutions using object-oriented methodologies and design patterns
Preferred:
Curiosity to learn new technologies and toolsets
Experience with the Microsoft technology stack (e.g., C#, .NET), JSON, NoSQL databases
Experience with large-scale search engines
Experience with Lucene or Elasticsearch technologies
Experience building cloud-native services using REST APIs, microservices-based architectures, and containerized technologies (e.g., K8s and Docker)
Experience with documents and document conversion
Experience with telemetry software
Experience with Git, continuous integration, and deployment tools
Experience working in an agile development environment
Life at Docusign
Working here
Docusign is committed to building trust and making the world more agreeable for our employees, customers and the communities in which we live and work. You can count on us to listen, be honest, and try our best to do what’s right, every day. At Docusign, everything is equal. We each have a responsibility to ensure every team member has an equal opportunity to succeed, to be heard, to exchange ideas openly, to build lasting relationships, and to do the work of their life. Best of all, you will be able to feel deep pride in the work you do, because your contribution helps us make the world better than we found it. And for that, you’ll be loved by us, our customers, and the world in which we live.
Accommodation
Docusign is committed to providing reasonable accommodations for qualified individuals with disabilities in our job application procedures. If you need such an accommodation, or a religious accommodation, during the application process, please contact us at accommodations@docusign.com. If you experience any issues, concerns, or technical difficulties during the application process please get in touch with our Talent organization at taops@docusign.com for assistance.
Applicant and Candidate Privacy Notice
Posted 16 hours ago
6.0 - 10.0 years
0 Lacs
Delhi
On-site
Job requisition ID :: 84245
Date: Jun 15, 2025
Location: Delhi
Designation: Consultant
Entity:
What impact will you make?
Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.
The Team
Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about the Analytics and Information Management Practice.
Work you’ll do
As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.
We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.
Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.
Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar.
Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues
Required Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5.
Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL 6. Proficiency in Python and PySpark programming 7. Strong SQL skills and experience with PostgreSQL 8. Experience with AWS Step Functions for workflow orchestration Technical Skills: AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB , Step Functions Big Data: Hadoop, Spark, Delta Lake Programming: Python, PySpark Databases: SQL, PostgreSQL, NoSQL Data Warehousing and Analytics ETL/ELT processes Data Lake architectures Version control: Github Your role as a leader At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultant across our organization: Develop high-performing people and teams through challenging and meaningful opportunities Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction. Acts as a role model, embracing and living our purpose and values, and recognizing others for the impact they make How you will grow At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Our purpose Deloitte is led by a purpose: To make an impact that matters . Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the Communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world Recruiter tips We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
Posted 16 hours ago
0 years
0 Lacs
Gurgaon
On-site
We are seeking an experienced Azure Migration Engineer to lead the migration of on-premises infrastructure and applications to Microsoft Azure. The ideal candidate will have in-depth expertise in Azure services, tools, and frameworks, with a strong capability in modernizing applications and designing future-state cloud architectures. This role demands technical leadership, hands-on proficiency with automation, and a solid understanding of enterprise-scale cloud transformation.
Roles and Responsibilities
Discovery and Assessment: Conduct in-depth discovery and analysis of current on-premises environments using Azure tools such as Azure Migrate, the Azure Assessment and Planning Toolkit, and Azure Monitor. Provide strategic migration recommendations and roadmaps based on the assessment.
Azure Services Implementation: Design and implement scalable, secure, and cost-effective solutions using Azure services such as Azure Virtual Machines, Azure Virtual Network (VNet), Azure SQL Database, Azure Blob Storage, Azure Functions, Azure Monitor, and Azure ExpressRoute.
Networking Setup: Configure secure, high-performance hybrid connectivity between on-premises and Azure using Azure ExpressRoute and Site-to-Site VPNs. Troubleshoot and optimize networking setups for minimal latency and high reliability.
Automation and Infrastructure-as-Code: Utilize Terraform, ARM templates, and Bicep for infrastructure provisioning. Automate deployments and configuration management using tools like Ansible, and scripting languages such as PowerShell, Bash, and Python.
Application Modernization: Refactor and re-architect legacy applications to adopt Azure App Services, Azure Kubernetes Service (AKS), and microservices-based cloud-native designs. Improve performance, scalability, and deployment agility.
Cloud Architecture Design: Define and implement target-state architectures for applications and infrastructure on Azure. Ensure solutions follow Microsoft Cloud Adoption Framework (CAF) best practices, emphasizing security, scalability, and governance.
Qualifications
Hands-on experience in migrating on-premises workloads and applications to Microsoft Azure.
Strong command of Azure services and tools for migration, infrastructure, and monitoring.
Proficiency with Terraform, Ansible, PowerShell/Bash scripting, and Python.
In-depth understanding of networking principles, including ExpressRoute and VPN configurations.
Experience in modernizing applications using Azure-native services and containerization strategies.
Expertise in designing secure, resilient, and scalable architectures on Azure.
Strong troubleshooting and problem-solving abilities.
BE/BTech in Computer Science, IT, or equivalent.
Microsoft Azure Certifications (e.g., Azure Solutions Architect Expert, Azure Administrator Associate, Azure DevOps Engineer Expert).
Familiarity with DevOps practices, CI/CD pipelines, and tools like Azure DevOps or GitHub Actions.
Experience working in Agile or Scrum-based environments.
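As a small illustration of the discovery side of such a migration, a minimal Python sketch that inventories the VMs in a subscription using the Azure SDK. It assumes the azure-identity and azure-mgmt-compute packages and a subscription ID in the environment; a real assessment would come from Azure Migrate rather than a script like this.

```python
"""Sketch: dump name, region, size and OS type for every VM in a subscription,
as raw input for right-sizing comparisons during a migration assessment."""
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]  # assumed to be set
compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every VM in the subscription and print the fields a sizing review needs.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size,
          vm.storage_profile.os_disk.os_type)
```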
Posted 16 hours ago
5.0 years
0 Lacs
Gurgaon
Remote
About Us: At apexanalytix, we're lifelong innovators! Since our founding nearly four decades ago we've been consistently growing, profitable, and delivering the best procure-to-pay solutions to the world. We're the perfect balance of established company and start-up. You will find a unique home here. And you'll recognize the names of our clients. Most of them are on The Global 2000. They trust us to give them the latest in controls, audit and analytics software every day. Industry analysts consistently rank us as a top supplier management solution, and you'll be helping build that reputation. Read more about apexanalytix - https://www.apexanalytix.com/about/
Job Details
The Role
Quick Take - We are looking for a highly skilled systems engineer with experience working with virtualization, Linux, Kubernetes, and server infrastructure. The engineer will be responsible for designing, deploying, and maintaining enterprise-grade cloud infrastructure using Apache CloudStack or similar technology and Kubernetes on the Linux operating system.
The Work -
Hypervisor Administration & Engineering
Architect, deploy, and manage Apache CloudStack for private and hybrid cloud environments.
Manage and optimize KVM or similar virtualization technology.
Implement high-availability cloud services using redundant networking, storage, and compute.
Automate infrastructure provisioning using OpenTofu, Ansible, and API scripting.
Troubleshoot and optimize hypervisor networking (virtual routers, isolated networks), storage, and API integrations.
Working experience with shared storage technologies like GFS and NFS.
Kubernetes & Container Orchestration
Deploy and manage Kubernetes clusters in on-premises and hybrid environments.
Integrate Cluster API (CAPI) for automated K8s provisioning.
Manage Helm, Azure DevOps, and ingress (Nginx/Citrix) for application deployment.
Implement container security best practices, policy-based access control, and resource optimization.
Linux Administration
Configure and maintain RedHat HA Clustering (Pacemaker, Corosync) for mission-critical applications.
Manage GFS2 shared storage, cluster fencing, and high-availability networking.
Ensure seamless failover and data consistency across cluster nodes.
Perform Linux OS hardening, security patching, performance tuning, and troubleshooting.
Physical Server Maintenance & Hardware Management
Perform physical server installation, diagnostics, firmware upgrades, and maintenance.
Work with SAN/NAS storage, network switches, and power management in data centers.
Implement out-of-band management (IPMI/iLO/DRAC) for remote server monitoring and recovery.
Ensure hardware resilience, failure prediction, and proper capacity planning.
Automation, Monitoring & Performance Optimization
Automate infrastructure provisioning, monitoring, and self-healing capabilities.
Implement Prometheus, Grafana, and custom scripting via API for proactive monitoring.
Optimize compute, storage, and network performance in large-scale environments.
Implement disaster recovery (DR) and backup solutions for cloud workloads.
Collaboration & Documentation
Work closely with DevOps, Enterprise Support, and software developers to streamline cloud workflows.
Maintain detailed infrastructure documentation, playbooks, and incident reports.
Train and mentor junior engineers on CloudStack, Kubernetes, and HA Clustering.
The Must-Haves -
5+ years of experience in CloudStack or a similar virtualization platform, Kubernetes, and Linux system administration.
Strong expertise in Apache CloudStack (4.19+) or a similar virtualization platform, the KVM hypervisor, and Cluster API (CAPI).
Extensive experience in RedHat HA Clustering (Pacemaker, Corosync) and GFS2 shared storage.
Proficiency in OpenTofu, Ansible, Bash, Python, and Go for infrastructure automation.
Experience with networking (VXLAN, SDN, BGP) and security best practices.
Hands-on expertise in physical server maintenance, IPMI/iLO, RAID, and SAN storage.
Strong troubleshooting skills in Linux performance tuning, logs, and kernel debugging.
Knowledge of monitoring tools (Prometheus, Grafana, Alertmanager).
Preferred Qualifications
Experience with multi-cloud (AWS, Azure, GCP) or hybrid cloud environments.
Familiarity with CloudStack API customization and plugin development.
Strong background in disaster recovery (DR) and backup solutions for cloud environments.
Understanding of service meshes, ingress, and SSO.
Experience in Cisco UCS platform management.
Over the years, we've discovered that the most effective and successful associates at apexanalytix are people who have a specific combination of values, skills, and behaviors that we call "The apex Way". Read more about The apex Way - https://www.apexanalytix.com/careers/
Benefits
At apexanalytix we know that our associates are the reason behind our successes. We truly value you as an associate and part of our professional family. Our goal is to offer the very best benefits possible to you and your loved ones. When it comes to benefits, whether for yourself or your family, the most important aspect is choice. And we get that. apexanalytix offers competitive benefits for the countries that we serve, in addition to our BeWell@apex initiative that encourages employees' growth in six key wellness areas: Emotional, Physical, Community, Financial, Social, and Intelligence. With resources such as a strong Mentor Program, Internal Training Portal, plus Education, Tuition, and Certification Assistance, we provide tools for our associates to grow and develop.
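To make the Kubernetes side of this role concrete, a minimal node-health probe using the official Python client (assumed installed as the `kubernetes` package and pointed at a valid kubeconfig); thresholds and cluster names would be environment-specific.

```python
"""Sketch: report the Ready condition of every node in the current cluster."""
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when run inside a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    # The "Ready" condition reflects whether the kubelet is healthy on that node.
    ready = next((c.status for c in node.status.conditions if c.type == "Ready"), "Unknown")
    print(f"{node.metadata.name}: Ready={ready}")
```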
Posted 16 hours ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are looking for a highly skilled MERN Stack Tech Lead with expertise in Next.js and Node.js to join our dynamic team. The ideal candidate will have a strong background in full-stack JavaScript development, excellent leadership skills, and experience managing development teams. You will be responsible for designing and developing high-performance web applications, ensuring best practices, and mentoring a team of developers.
Key Responsibilities:
Lead and mentor a team of MERN stack developers.
Design and develop scalable, high-performance web applications using MongoDB, Express.js, React.js, and Node.js.
Utilize Next.js for server-side rendering (SSR), static site generation (SSG), and performance optimization.
Develop and optimize APIs using Node.js and Express.js.
Ensure high-quality code by enforcing coding standards, performing code reviews, and implementing best practices.
Collaborate with cross-functional teams including UI/UX designers, backend developers, and product managers.
Implement and maintain security best practices, authentication mechanisms, and authorization protocols.
Manage deployment processes using CI/CD pipelines and cloud services such as AWS, Azure, or Google Cloud.
Troubleshoot and optimize performance issues for web applications.
Stay updated with the latest industry trends, tools, and technologies.
Required Skills & Qualifications:
5+ years of hands-on experience in MERN Stack development.
Strong proficiency in Next.js and Node.js.
Experience with TypeScript is a plus.
Strong knowledge of RESTful APIs and GraphQL.
Proficiency in database design and management using MongoDB.
Hands-on experience with state management libraries (e.g., Redux, Context API, Recoil).
Strong understanding of server-side rendering (SSR) and static site generation (SSG) with Next.js.
Experience with Docker, Kubernetes, and cloud platforms is a plus.
Ability to write clean, maintainable, and efficient code.
Excellent problem-solving and debugging skills.
Strong leadership and team management abilities; experience managing a team of at least 5 members.
Excellent communication and collaboration skills.
Posted 16 hours ago
3.0 years
8 - 15 Lacs
Mohali
On-site
Job Information
Date Opened: 06/16/2025
Job Type: Full time
Industry: IT Services
Work Experience: 3+ Years
Salary: 8-15 LPA
City: Mohali
State/Province: Punjab
Country: India
Zip/Postal Code: 160071
Job Description
ABOUT XENONSTACK
XenonStack is the fastest-growing data and AI foundry for agentic systems, which enables people and organizations to gain real-time and intelligent business insights.
Building Agentic Systems for AI Agents with https://www.akira.ai
Vision AI Platform with https://www.xenonstack.ai
Inference AI Infrastructure for Agentic Systems - https://www.nexastack.ai
THE OPPORTUNITY
We are seeking an experienced DevOps Engineer with 3-6 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team.
JOB ROLES AND RESPONSIBILITIES
Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across AWS and Private Cloud.
Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices.
Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments.
Automate repetitive tasks and manage cloud infrastructure using tools like Terraform, CloudFormation, and scripting languages (Python, Bash).
Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows.
Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste.
Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team.
Requirements
SKILLS REQUIREMENTS
2-4 years of experience in DevOps or cloud infrastructure engineering.
Proficiency in cloud platforms on AWS, and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.).
Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines.
Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale.
Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation.
Proficient in scripting languages such as Python and Bash for automating infrastructure tasks and deployments.
Understanding of monitoring and logging tools like Prometheus, Grafana, ELK Stack, or CloudWatch to ensure system performance and uptime.
Strong understanding of Linux-based operating systems and cloud-based infrastructure management.
Bachelor's degree in Computer Science, Information Technology, or related field.
Benefits
CAREER GROWTH AND BENEFITS
Continuous Learning & Growth: Access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills. Opportunities for career advancement and leadership roles in DevOps engineering.
Recognition & Rewards: Performance-based incentives and regular feedback to help you grow in your career. Special recognition for contributions towards streamlining and improving DevOps practices.
Work Benefits & Well-Being: Comprehensive health insurance and wellness programs to ensure a healthy work-life balance. Cab facilities for women employees and additional allowances for project-based tasks.
XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT
Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category and welcome people with that mindset and ambition. If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack.
Product Value and Outcome - Simplifying the user experience with AI Agents and Agentic AI
1) Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption.
2) Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI.
Be a part of XenonStack's Vision and Mission for Accelerating the world's transition to AI + Human Intelligence.
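For illustration of the cost-monitoring responsibility described above, a minimal boto3 sketch that flags running EC2 instances missing an "owner" tag so untracked spend can be chased down. The tag key and filter are assumptions, not XenonStack conventions.

```python
"""Sketch: list running EC2 instances that lack an 'owner' tag."""
import boto3

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

# Only look at running instances; stopped ones cost far less.
filters = [{"Name": "instance-state-name", "Values": ["running"]}]

for page in paginator.paginate(Filters=filters):
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "owner" not in tags:   # assumed tagging convention
                print(f"Untagged instance: {instance['InstanceId']} ({instance['InstanceType']})")
```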
Posted 16 hours ago
0 years
0 - 0 Lacs
India
On-site
DevOps Engineer – Intern
Location: KIIT TBI, Bhubaneswar
Duration: 3–4 months
About Us
We're looking for a passionate and self-motivated DevOps Intern to assist our engineering team in automating infrastructure and improving deployment pipelines.
Key Responsibilities
Assist in setting up and maintaining CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
Help manage infrastructure with tools like Docker, Kubernetes, and Terraform (under guidance).
Support in automating routine development, build, and deployment tasks.
Work with developers to ensure smooth deployments and rollback strategies.
Monitor applications and infrastructure with basic observability tools (Grafana, Prometheus, etc.).
Learn and apply DevOps best practices including version control, containerization, and scripting.
Who You Are
Currently pursuing or recently completed a degree in Computer Science, IT, or a related field.
Basic understanding of Linux, shell scripting, and version control (Git).
Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
Familiarity with Docker or Kubernetes is a bonus, not mandatory.
Eager to learn and grow in a fast-paced DevOps/CloudOps environment.
Good communication and collaboration skills.
Nice to Have (Not Mandatory)
Experience with a personal or academic project using DevOps tools.
Participation in open-source or hackathon projects.
What You'll Gain
Real-world exposure to DevOps practices in a production environment.
Opportunity to convert to a full-time role based on performance.
Experience with modern cloud-native tools and practices.
Certificate and letter of recommendation upon successful completion.
How to Apply
Please share your: Resume, GitHub/portfolio links (if available), and a short note on why you're interested in DevOps.
Job Types: Fresher, Internship
Contract length: 3 months
Pay: ₹5,000.00 - ₹6,000.00 per month
Benefits: Flexible schedule, Leave encashment, Paid time off, Provident Fund
Schedule: Day shift, Fixed shift
Supplemental Pay: Performance bonus, Quarterly bonus, Yearly bonus
Ability to commute/relocate: Patia, Bhubaneswar, Orissa: Reliably commute or planning to relocate before starting work (Required)
Application Question(s):
When can you join us if selected? (This is an urgent opening.)
What is DevOps in your own words?
What operating systems have you worked with (Linux, Windows, etc.)?
Education: Bachelor's (Preferred)
Location: Patia, Bhubaneswar, Orissa (Preferred)
Work Location: In person
Application Deadline: 28/06/2025
Expected Start Date: 30/06/2025
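As a taste of the observability tasks mentioned above, a small sketch that queries a Prometheus server's HTTP API for the `up` metric and reports targets that are down. The server URL is a placeholder assumption.

```python
"""Sketch: list scrape targets Prometheus currently reports as down."""
import requests

PROM_URL = "http://localhost:9090"  # assumed local Prometheus instance

resp = requests.get(f"{PROM_URL}/api/v1/query", params={"query": "up"}, timeout=10)
resp.raise_for_status()

for series in resp.json()["data"]["result"]:
    job = series["metric"].get("job", "unknown")
    instance = series["metric"].get("instance", "unknown")
    value = series["value"][1]       # "1" = target up, "0" = target down
    if value != "1":
        print(f"DOWN: job={job} instance={instance}")
```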
Posted 16 hours ago
9.0 years
0 Lacs
India
On-site
Top Secret/SCI | IT - Data Science | Camp Smith, HI (on-site/office)
Description
SAIC is seeking a Data Scientist to support our work at Camp Smith. This position will develop and implement data analytics techniques and applications to transform raw data into meaningful information using data-oriented programming languages and visualization software. The candidate will:
Apply data mining, data modeling, statistics, graph algorithms and machine learning to extract and analyze information from large structured and unstructured datasets to support analytics objectives.
Visualize, interpret, and report data findings in dynamic data reports.
Employ a variety of data manipulation and visualization tools to best convey information and results to customers.
Be comfortable working with data in a variety of formats including Excel, CSV, JSON, and XML.
Support the design, development, testing and implementation of web-based collaboration tools and platforms for data reporting.
Plan and conduct software integration or testing, including analyzing and implementing test plans and scripts, in support of analytics objectives.
Demonstrate proficiency with frequently used scripting languages such as Python (primary) or R, and with packages commonly used in data science or advanced analytics, such as SQL.
Be familiar with Kubernetes clusters and tools such as Prometheus or similar.
Conduct exploratory data analysis for hypothesis testing.
Use Microsoft Power BI, Tableau, and other toolsets to visualize data and share insights with senior decision makers.
Be proficient in Grafana, an open-source analytics and interactive visualization web application, for monitoring application performance.
Qualifications
Required Technical Skills:
Experience with scripting and programming, including Python and Java.
Experience with data visualization tools and dashboard development such as Grafana, Power BI, or Tableau.
Knowledge of probability, statistics, and machine learning.
Experience with NoSQL databases, such as MongoDB or Accumulo, and RDBMS databases such as Postgres or MySQL.
Applied statistics skills, such as distributions, statistical testing, and regression.
Scripting and programming skills using Python, the scikit-learn library, and other statistical analysis libraries/frameworks.
Understanding of common programming paradigms and APIs (pub/sub, REST, etc.).
Worked in a medium to large-scale project using configuration management tools (Atlassian suite, Git, GitHub/GitLab).
Required Personal Skills:
Strong interpersonal, communication, and presentation skills.
Able to manage and prioritize tasks to ensure optimum productivity.
Able to present technical briefs, giving full attention to what other people are saying, taking time to understand the points being made, asking questions as appropriate.
Able to lead discussions pertaining to technical subject matter.
Have effective customer service skills required to interface with corporate and government customers.
Able to effectively communicate with the Customer, command staff, and peer contractor personnel.
Able to effectively operate standard computer-based business tools (including but not limited to Microsoft Word and Excel).
Able to demonstrate excellent (clear and concise) written communication skills in a technical format, to support the development of all plans and reports required of the program.
Required Experience and Education:
Bachelor of Science in the following preferred fields: Computer Science, Engineering, Information Systems, Information Technology or related fields. Additional experience and certifications may be considered in lieu of a degree.
9+ years of relevant experience.
Target salary range: $120,001 - $160,000. The estimate displayed represents the typical salary range for this position based on experience and other factors. SAIC accepts applications on an ongoing basis and there is no deadline.
Covid Policy: SAIC does not require COVID-19 vaccinations or boosters. Customer site vaccination requirements must be followed when work is performed at a customer site.
GROUP ID: 10111346
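For illustration of the pandas/scikit-learn workflow this posting expects, a minimal sketch that loads a CSV, fits a logistic regression, and reports hold-out accuracy. The file name and column names are hypothetical, not SAIC's data.

```python
"""Sketch: simple supervised-learning baseline with pandas and scikit-learn."""
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("readiness_events.csv")        # hypothetical structured dataset
X = df[["sensor_a", "sensor_b", "sensor_c"]]    # hypothetical feature columns
y = df["incident"]                              # hypothetical binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```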
Posted 16 hours ago
1.0 years
11 - 13 Lacs
Pune
Remote
Experience: 1+ years
Work location: Bangalore, Chennai, Hyderabad, Pune - Hybrid
Job Description: GCP Cloud Engineer
Shift Time: 2 to 11 PM IST
Budget: Max 13 LPA
Primary Skill & Weightage
GCP - 50%
Kubernetes - 25%
NodeJS - 25%
Technical Skills
Cloud: Experience working with Google Cloud Platform (GCP) services.
Containers & Orchestration: Practical experience deploying and managing applications on Kubernetes.
Programming: Proficiency in Node.js development, including building and maintaining RESTful APIs or backend services.
Messaging: Familiarity with Apache Kafka for producing and consuming messages.
Databases: Experience with PostgreSQL or similar relational databases (writing queries, basic schema design).
Version Control: Proficient with Git and GitHub workflows (branching, pull requests, code reviews).
Development Tools: Comfortable using Visual Studio Code (VSCode) or similar IDEs.
Additional Requirements
Communication: Ability to communicate clearly in English (written and verbal).
Collaboration: Experience working in distributed or remote teams.
Problem Solving: Demonstrated ability to troubleshoot and debug issues independently.
Learning: Willingness to learn new technologies and adapt to changing requirements.
Preferred but not required:
Experience with CI/CD pipelines.
Familiarity with Agile methodologies.
Exposure to monitoring/logging tools (e.g., Prometheus, Grafana, ELK stack).
Job Type: Full-time
Pay: ₹1,100,000.00 - ₹1,300,000.00 per year
Schedule: UK shift
Work Location: In person
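Combining the messaging and database skills above, a minimal sketch that consumes JSON events from a Kafka topic and inserts them into PostgreSQL. It assumes the kafka-python and psycopg2 packages; the broker address, topic, table, and credentials are placeholder assumptions.

```python
"""Sketch: Kafka consumer writing events into a PostgreSQL table."""
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

conn = psycopg2.connect(host="localhost", dbname="appdb", user="app", password="secret")
cur = conn.cursor()

for message in consumer:
    event = message.value
    # Naive insert; a production consumer would batch and handle retries/duplicates.
    cur.execute(
        "INSERT INTO orders (order_id, amount) VALUES (%s, %s)",
        (event["order_id"], event["amount"]),
    )
    conn.commit()
```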
Posted 16 hours ago
5.0 years
0 Lacs
India
On-site
Job Description
Job Title: DevOps Engineer
Company Name: Web Minds IT Solution, Pune
Employment Type: Full-time
Experience: 5–10 years
Job Description: We are seeking an experienced DevOps Engineer to design, implement, and manage scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and operations teams to automate deployments, optimize cloud resources, and enhance system reliability. The ideal candidate has strong expertise in cloud platforms, containerization, and infrastructure as code. This role is key to driving DevOps best practices, improving delivery speed, and ensuring high system availability.
Qualification:
Bachelor's degree in Computer Science, IT, or related field (required). Master's degree or relevant certifications (e.g., AWS, Kubernetes, Terraform).
5 to 10 years of proven experience in DevOps, infrastructure automation, and cloud environments.
Experience with CI/CD, containerization, and infrastructure as code.
Relevant certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate).
Job Responsibilities:
Design, implement, and maintain enterprise-grade CI/CD pipelines for efficient software delivery.
Manage and automate cloud infrastructure (AWS, Azure, or GCP) with strong emphasis on security, scalability, and cost-efficiency.
Develop and maintain Infrastructure as Code (IaC) using tools like Terraform, Ansible, or CloudFormation.
Orchestrate and manage containerized environments using Docker and Kubernetes.
Implement and optimize monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK, Datadog).
Ensure high availability, disaster recovery, and performance tuning of systems.
Collaborate with development, QA, and security teams to enforce DevSecOps best practices.
Lead troubleshooting of complex infrastructure and deployment issues in production environments.
Mentor junior team members and contribute to DevOps strategy and architecture.
Required Skills:
Strong hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
Proficient with cloud platforms (AWS preferred, Azure or GCP acceptable).
Expertise in Infrastructure as Code using Terraform, Ansible, or CloudFormation.
Deep understanding of Docker and Kubernetes for containerization and orchestration.
Strong scripting skills (Bash, Python, or Shell) for automation.
Experience with monitoring, logging, and alerting tools (e.g., Prometheus, ELK, Grafana, CloudWatch).
Solid grasp of Linux system administration, networking concepts, and security best practices.
Familiarity with version control tools like Git and branching strategies.
Soft Skills:
Strong problem-solving and analytical thinking.
Excellent communication and collaboration skills.
Ability to work in a fast-paced, dynamic environment.
Proactive mindset with a focus on automation, reliability, and scalability.
Job Type: Full-time
Schedule: Day shift, Fixed shift
Supplemental Pay: Performance bonus
Work Location: In person
Speak with the employer: +91 8080963983
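As a small illustration of the Infrastructure-as-Code work described above, a drift-detection sketch that runs `terraform plan -detailed-exitcode` and fails a CI job when live infrastructure has drifted from code. Only the standard library is used; the working directory is an assumed Terraform root module.

```python
"""Sketch: fail a pipeline stage when terraform plan reports pending changes."""
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-no-color"],
    cwd="infra/prod",              # hypothetical Terraform root module
    capture_output=True,
    text=True,
)

# terraform's documented exit codes: 0 = no changes, 1 = error, 2 = changes pending
if result.returncode == 2:
    print("Drift detected - plan shows pending changes:\n", result.stdout)
    sys.exit(1)                    # non-zero exit fails the CI job for review
elif result.returncode == 1:
    print("terraform plan failed:\n", result.stderr)
    sys.exit(1)
print("No drift detected.")
```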
Posted 16 hours ago
0 years
12 - 20 Lacs
India
On-site
● Backend & Frontend Expertise: Strong proficiency in Python and FastAPI for microservices. Strong in TypeScript/Node.js for GraphQL/RESTful API interfaces.
● Cloud & Infra Application: Hands-on AWS experience, proficient with existing Terraform. Working knowledge of Kubernetes/Argo CD for deployment/troubleshooting.
● CI/CD & Observability: Designs and maintains GitHub Actions pipelines. Implements OpenTelemetry for effective monitoring and debugging.
● System Design: Experience designing and owning specific microservices (APIs, data models, integrations).
● Quality & Testing: Drives robust unit, integration, and E2E testing. Leads code reviews.
● Mentorship: Guides junior engineers, leads technical discussions for features.
Senior Engineers
● Python and FastAPI
● TypeScript and Node.js
● GraphQL/RESTful API interfaces
● AWS
● Terraform
● Working knowledge of Kubernetes/Argo CD for deployment/troubleshooting
● CI/CD via GitHub Actions pipelines
● OpenTelemetry
● Unit, integration, and E2E testing
Job Type: Full-time
Pay: ₹1,250,000.00 - ₹2,000,000.00 per year
Benefits: Paid time off
Schedule: Day shift, Monday to Friday
Work Location: In person
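To ground the Python/FastAPI part of this stack, a minimal microservice sketch; the endpoint and model names are illustrative, not the employer's API.

```python
"""Sketch: tiny FastAPI service with an in-memory store standing in for a database."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")


class Order(BaseModel):
    order_id: str
    amount: float


_ORDERS: dict[str, Order] = {}      # in-memory store for the sketch only


@app.post("/orders", status_code=201)
def create_order(order: Order):
    _ORDERS[order.order_id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: str):
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]

# Run locally with: uvicorn main:app --reload
```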
Posted 16 hours ago
5.0 years
2 - 5 Lacs
Mumbai
On-site
Company Description
Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents, looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity.
Job Description
About the Role
As a DevOps Engineer you will work closely with our global teams to learn about the business and technical requirements and formulate the necessary infrastructure and resource plans to properly support the growth and maintainability of various systems.
Key Responsibilities
Implement a diverse set of development, testing, and automation tools, as well as manage IT infrastructure.
Plan the team structure and activities, and actively participate in project management.
Comprehend customer requirements and project Key Performance Indicators (KPIs).
Manage stakeholders and handle external interfaces effectively.
Set up essential tools and infrastructure to support project development.
Define and establish DevOps processes for development, testing, release, updates, and support.
Possess the technical expertise to review, verify, and validate software code developed in the project.
Engage in software engineering tasks, including designing and developing systems to enhance reliability, scalability, and operational efficiency through automation.
Collaborate closely with agile teams to ensure they have the necessary tools for seamless code writing, testing, and deployment, promoting satisfaction among development and QA teams.
Monitor processes throughout their lifecycle, ensuring adherence, identifying areas for improvement, and minimizing wastage.
Advocate and implement automated processes whenever feasible.
Identify and deploy cybersecurity measures by continuously performing vulnerability assessments and managing risk.
Handle incident management and conduct root cause analysis for continuous improvement.
Coordinate and communicate effectively within the team and with customers.
Build and maintain continuous integration (CI) and continuous deployment (CD) environments, along with associated processes and tools.
Qualifications
About the Candidate
Proven 5 years of experience with Linux-based infrastructure and proficiency in a scripting language.
Must have solid cloud computing skills such as network management, cloud compute and cloud databases in any one of the public clouds (AWS, Azure or GCP).
Must have hands-on experience in setting up and managing cloud infrastructure like Kubernetes, VPC, VPN, virtual machines, cloud databases, etc.
Experience in IaC (Infrastructure as Code) tools like Ansible and Terraform.
Must have hands-on experience in coding and scripting in at least one of the following: Shell, Python, Groovy.
Experience as a DevOps Engineer or similar software engineering role.
Experienced in establishing an optimized CI/CD environment relevant to the project.
Automation using scripting languages like Perl/Python and shell scripts like Bash and CSH.
Good knowledge of configuration and build tools like Bazel, Jenkins, etc.
Good knowledge of repository management tools like Git, Bitbucket, etc.
Good knowledge of monitoring solutions and generating insights for reporting.
Excellent debugging skills/strategies.
Excellent communication skills.
Experienced in working in an Agile environment.
Additional Information
Benefits
At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include:
Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home.
Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements.
Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion.
Events: Regular team and organisation-wide get-togethers and events.
Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together.
Future development
At Quantanite, you'll have a personal development plan to help you improve in the areas you're looking to develop in over the coming years. Your manager will dedicate time and resources to supporting you in getting to the next level. You'll also have the opportunity to progress internally. As a fast-growing organisation, our teams are growing, and you'll have the chance to take on more responsibility over time. So, if you're looking for a career full of purpose and potential, we'd love to hear from you!
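As an illustration of the CI/CD and monitoring duties above, a post-deployment smoke-test sketch of the kind that might gate a CD pipeline: probe each service's health endpoint and fail the job if any is unhealthy. The service URLs are placeholder assumptions.

```python
"""Sketch: simple smoke test suitable for wiring into a CD pipeline stage."""
import sys

import requests

SERVICES = {
    "api": "https://api.example.internal/healthz",       # hypothetical endpoints
    "worker": "https://worker.example.internal/healthz",
}

failures = []
for name, url in SERVICES.items():
    try:
        resp = requests.get(url, timeout=5)
        if resp.status_code != 200:
            failures.append(f"{name}: HTTP {resp.status_code}")
    except requests.RequestException as exc:
        failures.append(f"{name}: {exc}")

if failures:
    print("Smoke test failed:", *failures, sep="\n  ")
    sys.exit(1)       # non-zero exit marks the pipeline stage as failed
print("All services healthy.")
```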
Posted 16 hours ago
0 years
2 - 10 Lacs
Pune
On-site
India
Information Technology (IT)
Group Functions
Job Reference # 319292BR
City: Pune
Job Type: Full Time
Your role
Are you an enthusiastic technology professional, interested in state-of-the-art frameworks, tools and techniques? Are you eager to understand and fulfill our clients' needs? At UBS, we re-imagine the way we work, the way we connect with each other – our colleagues, clients and partners – and the way we deliver value. Being agile will make us more responsive, more adaptable, and ultimately more innovative.
We're looking for a Software Engineer with Linux experience. You will be required to:
be responsible for designing and building critical components to successfully deliver solutions
work with a global team of analysts, engineers and business stakeholders
be overall responsible for creating impactful change to our clients through the delivery of our products while ensuring the high quality and compliance of the product with risk and security policies
take ownership and drive software deliveries
embrace complex business requirements and enjoy the challenge of implementing them
provide engineering and analytical skills
identify opportunities to improve our processes
mentor junior team members
contribute widely in establishing and promoting best practices and pro-actively investigate new technologies
be able to perform code reviews across our team and enforce Enterprise Application design and architectural standards
work in an agile team, with a hybrid working model
Your team
Based out of India, you will be part of the Document Creation and Distribution Crew, which is part of the foundation stream of Client Document and Records Management. In our agile operating model, crews are aligned to larger products and services fulfilling client needs and encompass multiple autonomous pods. You will work in a cross-functional and agile team responsible for the Output Management solutions.
Your expertise
robust experience in the development, design, maintenance and integration of Java software solutions
experience with the full software development life cycle and Agile methodologies
strong analytical and problem-solving skills
hands-on experience with the Spring Framework, RESTful APIs, Maven, and GitLab
experience with Spring Boot is a plus
knowledge of software design patterns, and enterprise/integration patterns
knowledge of database systems
familiarity with testing methodologies
experience with cloud computing platforms, i.e. Microsoft Azure
experience with Ansible and CI/CD (nice to have)
experience with Kubernetes (nice to have)
strong communicator – able to interface with key business and technology stakeholders
organized, well-structured and with a drive to deliver
excellent at time management
About us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.
How we hire
We may request you to complete one or more assessments during the application process. Learn more
Join us
At UBS, we know that it's our people, with their diverse skills, experiences and backgrounds, who drive our ongoing success.
We’re dedicated to our craft and passionate about putting our people first, with new challenges, a supportive team, opportunities to grow and flexible working options when possible. Our inclusive culture brings out the best in our employees, wherever they are on their career journey. We also recognize that great work is never done alone. That’s why collaboration is at the heart of everything we do. Because together, we’re more than ourselves. We’re committed to disability inclusion and if you need reasonable accommodation/adjustments throughout our recruitment process, you can always contact us. Contact Details UBS Business Solutions SA UBS Recruiting Disclaimer / Policy statements UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.
Posted 16 hours ago