Home
Jobs

3929 GitLab Jobs - Page 24

Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
Filter
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Senior Analytics Engineer, DataOne
Location: Pune, India
This role requires the candidate to be based in Pune and work from an office 4 or 5 days a week. Please only apply if you're okay with these requirements.

Position Summary
HackerOne is seeking a Senior Analytics Engineer to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source-of-truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's single source of truth. As a Senior Analytics Engineer, you'll lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions so that HackerOne can delight our Customers and empower the world to build a safer internet. The future is one where every Hackeronie is a catalyst for positive change, driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all.

What You Will Do
- Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding: learn our technology stack (Python, Airflow, Snowflake, dbt, Meltano, Fivetran, Looker, AWS) and meet our Hackeronies.
- Within 60 days, you will deliver impact at the company level through consistent contributions to high-impact, high-performance, scalable source-of-truth data marts and data products.
- Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives and fostering cross-departmental collaboration to enhance these efforts.
- Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source-of-truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives.
- Be a technical paragon and cross-functional force multiplier: autonomously determine where to apply focus, contribute at all levels, elevate your squad, and design solutions to ambiguous business challenges in a fast-paced, early-stage environment.
- Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices.
- Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company.
- Provide technical leadership and mentorship, fostering a culture of continuous learning and growth.

Minimum Qualifications
- 6+ years of experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role, with a proven track record of launching source-of-truth data marts.
- 6+ years of experience building and optimizing data pipelines, products, and solutions.
- Flexibility to accommodate occasional evening meetings in US time zones.
- Extensive experience with data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, dbt, and AWS.
- Expert in SQL for data manipulation in a fast-paced work environment.
- Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or Power BI.
- Proven track record of substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results.
- English fluency, excellent communication skills, and the ability to present data-driven narratives in verbal, presentation, and written formats.
- Passion for working backwards from the Customer and empathy for business stakeholders.
- Experience shaping the strategic vision for data.
- Experience working with Agile and iterative development processes.

Preferred Qualifications
- Strong proficiency in at least one data programming language such as Python or R.
- Experience working within, and with data from, business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice.
- Proven track record of driving innovation, adopting emerging technologies, and implementing industry best practices.
- Thrives on solving ambiguous problem statements in an early-stage environment.
- Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent.

Compensation Bands: Pune, India ₹3.7M – ₹4.6M. Offers Equity.

Job Benefits:
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under Colorado's Healthy Families and Workplaces Act)
- Employee Assistance Program
- Flexible Work Stipend
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available. Employment at HackerOne is contingent on a background check.

HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US-based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.
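The listing above does not include code; as a rough, stdlib-only illustration of the "source-of-truth data mart" and data-quality work it describes, the sketch below uses sqlite3 as a stand-in for the Snowflake/dbt stack named in the posting. All table and column names are invented:

```python
import sqlite3

# Hypothetical raw source table, standing in for a Snowflake landing zone.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_reports (
        report_id INTEGER, program TEXT, severity TEXT, bounty_usd REAL
    );
    INSERT INTO raw_reports VALUES
        (1, 'acme',   'high', 5000.0),
        (1, 'acme',   'high', 5000.0),   -- duplicate row from a re-sync
        (2, 'acme',   'low',   250.0),
        (3, 'globex',  NULL,  1000.0);   -- missing severity
""")

# Source-of-truth mart: deduplicate, then aggregate per program.
conn.execute("""
    CREATE TABLE mart_program_bounties AS
    SELECT program,
           COUNT(DISTINCT report_id) AS report_count,
           SUM(bounty_usd)           AS total_bounty_usd
    FROM (SELECT DISTINCT report_id, program, severity, bounty_usd
          FROM raw_reports)
    GROUP BY program
""")

# Data-quality checks of the kind dbt tests express: uniqueness, completeness.
dupes = conn.execute("""
    SELECT report_id FROM raw_reports
    GROUP BY report_id, program, severity, bounty_usd HAVING COUNT(*) > 1
""").fetchall()
missing = conn.execute(
    "SELECT COUNT(*) FROM raw_reports WHERE severity IS NULL").fetchone()[0]

mart = dict(conn.execute(
    "SELECT program, total_bounty_usd FROM mart_program_bounties").fetchall())
print(mart)
print(f"duplicate rows: {len(dupes)}, rows missing severity: {missing}")
```

In a dbt project the same uniqueness and not-null checks would live declaratively in a schema YAML rather than hand-written SQL; this only sketches the underlying idea.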

Posted 5 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Description

Responsibilities
- Design, develop, implement, test, and maintain automated test suites and frameworks for AI/ML pipelines.
- Collaborate closely with ML engineers and data scientists to understand model architectures and data workflows.
- Develop and execute test plans, test cases, and test scripts to identify software defects in AI/ML applications.
- Ensure end-to-end quality of AI/ML solutions, including data integrity, model performance, and system integration.
- Implement continuous integration and continuous deployment (CI/CD) processes for ML pipelines.
- Conduct performance and scalability testing for AI/ML systems.
- Document and track software defects using bug-tracking systems, and report issues to development teams.
- Participate in code reviews and provide feedback on testability and quality.
- Help foster a culture of quality and continuous improvement within the ML engineering group.
- Stay updated with the latest trends and best practices in AI/ML testing and quality assurance.

Must Haves:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2+ years of experience in quality assurance, specifically testing AI/ML applications.
- Strong programming skills in Python (experience with libraries like PyTest or unittest).
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, or scikit-learn).
- Experience with test automation tools and frameworks.
- Knowledge of CI/CD tools (Jenkins, GitLab CI, or similar).
- Experience with containerization technologies like Docker and orchestration systems like Kubernetes.
- Proficiency in Linux operating systems.
- Familiarity with version control systems like Git.
- Strong understanding of software testing methodologies and best practices.
- Excellent analytical and problem-solving skills.
- Excellent communication and collaboration skills.

Bonus Attributes:
- Experience with testing data pipelines and ETL processes.
- Cloud platform experience (GCP, AWS, or Azure).
- Knowledge of big data technologies like Apache Spark, Kafka, or Airflow.
- Experience with performance testing tools.
- Understanding of data science concepts and statistical analysis.
- Certifications in software testing or cloud technologies.

Abilities:
- Ability to work with a high level of initiative, accuracy, and attention to detail.
- Ability to prioritize multiple assignments effectively.
- Ability to meet established deadlines.
- Ability to interact successfully, efficiently, and professionally with staff and customers.
- Excellent organizational skills.
- Critical thinking ability, ranging from moderately to highly complex problems.
- Flexibility in meeting the business needs of the customer and the company.
- Ability to work creatively and independently with latitude and minimal supervision.
- Ability to utilize experience and judgment in accomplishing assigned goals.
- Experience in navigating organizational structure.
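The posting above asks for unittest-style test automation for ML applications. As an illustrative sketch only (the toy model and all names are invented, not from the posting), here is what the common QA patterns, such as determinism checks, accuracy gates, and input validation, look like with the stdlib unittest module:

```python
import unittest

def centroid_classifier(train):
    """Toy stand-in for an ML model: nearest class centroid in 1-D."""
    centroids = {label: sum(xs) / len(xs) for label, xs in train.items()}
    def predict(x):
        if not isinstance(x, (int, float)):
            raise TypeError("numeric input required")
        return min(centroids, key=lambda c: abs(centroids[c] - x))
    return predict

class TestModelQuality(unittest.TestCase):
    def setUp(self):
        self.predict = centroid_classifier(
            {"low": [0.1, 0.2], "high": [0.9, 1.0]})

    def test_deterministic(self):
        # The same input must always yield the same prediction.
        preds = {self.predict(0.4) for _ in range(10)}
        self.assertEqual(len(preds), 1)

    def test_accuracy_threshold(self):
        # Gate the pipeline on minimum accuracy over a labelled holdout set.
        holdout = [(0.0, "low"), (0.15, "low"), (0.95, "high"), (1.1, "high")]
        acc = sum(self.predict(x) == y for x, y in holdout) / len(holdout)
        self.assertGreaterEqual(acc, 0.75)

    def test_rejects_bad_input(self):
        # Defensive behaviour on malformed input is a defect class too.
        with self.assertRaises(TypeError):
            self.predict("not a number")

# Build and run the suite explicitly so this works outside a test runner.
suite = unittest.TestLoader().loadTestsFromTestCase(TestModelQuality)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("failures:", len(result.failures), "errors:", len(result.errors))
```

With PyTest the same checks would be plain functions with bare asserts; real model-quality gates would also cover data drift and latency, which a toy example cannot show.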

Posted 5 days ago

Apply

7.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 PM
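Deployment automation of the kind this listing describes usually ends with a post-deploy health gate. As a small, generic sketch (not tied to any of the AWS tools named above; the `check` callable would wrap a real HTTP probe in practice), here is a poll-with-exponential-backoff helper:

```python
import time

def wait_until_healthy(check, attempts=5, base_delay=0.01):
    """Poll a health check after a deploy, backing off exponentially.

    `check` is any zero-argument callable returning True once the service
    is up (e.g. wrapping an HTTP GET against a /health endpoint).
    Returns the number of polls it took, or raises TimeoutError.
    """
    for attempt in range(attempts):
        if check():
            return attempt + 1
        time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ... base delay
    raise TimeoutError(f"service not healthy after {attempts} attempts")

# Simulated service that becomes healthy on the third poll.
state = {"polls": 0}
def fake_health():
    state["polls"] += 1
    return state["polls"] >= 3

took = wait_until_healthy(fake_health)
print("healthy after", took, "polls")
```

In a pipeline stage (CodePipeline, Jenkins, or GitLab CI/CD alike) a non-zero exit from this gate would fail the deploy and trigger a rollback.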

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

- 5+ years of experience in Software Engineering and 3+ years of experience with cloud-native architectures
- 2+ years implementing secure and compliant solutions for highly regulated environments
- 3+ years of experience with a container orchestration platform such as Kubernetes, EKS, ECS, AKS, or equivalent
- 2+ years of production system administration and infrastructure operations experience
- Excellence in container architecture, design, ecosystem, and/or development
- Experience with container-based CI/CD tools such as ArgoCD, Helm, Codefresh, GitHub Actions, GitLab, or equivalent

Posted 5 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Description
There's nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems.

As a Site Reliability Engineer III at JPMorgan Chase within the Infrastructure Platform team, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure, and independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team, sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job Responsibilities
- Guides and assists others in building appropriate level designs and gaining consensus from peers where appropriate
- Collaborates with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
- Collaborates with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
- Collaborates with technical experts, key stakeholders, and team members to resolve complex problems
- Understands service level indicators and utilizes service level objectives to proactively resolve issues before they impact customers
- Demonstrates strong analytical skills to diagnose and resolve complex technical issues, perform root cause analysis, and implement preventive measures; experience managing incidents and coordinating response efforts
- Drives initiatives for process and system improvements
- Supports the adoption of site reliability engineering best practices within your team
- Should complete the SRE Bar Raiser Program

Required Qualifications, Capabilities, and Skills
- Formal training or certification as a Site Reliability Engineer in an enterprise infrastructure environment and 3+ years of applied experience
- Proficient in site reliability culture and principles, with familiarity in implementing site reliability within an application or platform
- Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
- Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., cloud, artificial intelligence, Android)
- Experience in observability, such as white- and black-box monitoring, service level objective alerting, and telemetry collection, using tools such as Grafana, Dynatrace, Prometheus, Datadog, Splunk, and others
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
- Familiarity with containers and container orchestration such as ECS, Kubernetes, and Docker
- Familiarity with CI/CD pipelines and tools like Jenkins, GitLab CI, or CircleCI
- Proficiency in scripting languages like Python
- Experience with cloud platforms like AWS, Google Cloud, or Azure
- Understanding of infrastructure as code (IaC) using tools like Terraform or Ansible

Preferred Qualifications, Capabilities, and Skills
- Strong communication skills to collaborate with cross-functional teams
- Skills in planning for future growth and scalability of systems
- Experience with data protection solutions such as Cohesity or Commvault

About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.
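The SRE listing above mentions using service level objectives to resolve issues before they impact customers. The arithmetic behind SLO-based alerting is small enough to show directly; this is a generic sketch (numbers are invented, not from the posting):

```python
def error_budget_report(slo, total_requests, failed_requests,
                        window_days=30, elapsed_days=30):
    """Error-budget arithmetic behind SLO burn-rate alerting.

    slo: availability target, e.g. 0.999 for "three nines".
    A burn rate above 1.0 means the budget is being consumed faster than
    the window allows; multiwindow alerts typically page at much higher rates.
    """
    budget = 1.0 - slo                            # allowed failure fraction
    observed_rate = failed_requests / total_requests
    burn_rate = observed_rate / budget
    # Fraction of the whole window's budget already spent.
    budget_used = burn_rate * (elapsed_days / window_days)
    return {"burn_rate": burn_rate, "budget_used": budget_used}

# 99.9% SLO; 2,000 failures out of 1,000,000 requests, 15 days into a 30-day window.
report = error_budget_report(slo=0.999, total_requests=1_000_000,
                             failed_requests=2_000, elapsed_days=15)
print(report)
```

Here the service is failing at twice its allowed rate (burn rate 2.0), so halfway through the window the entire 30-day budget is already gone; tools like Grafana or Datadog alert on exactly this quantity.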

Posted 5 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Job Title: Senior Data Engineer - Data Quality, Ingestion & API Development
Mandatory skill set: Python, PySpark, AWS, Glue, Lambda, CI/CD
Total experience: 8+ years
Relevant experience: 8+ years
Work Location: Trivandrum / Kochi
Candidates from Kerala and Tamil Nadu who are ready to relocate to the above work locations are preferred. Candidates must have experience in a lead role as a Data Engineer.

Job Overview
We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.

Key Responsibilities
- Data Ingestion Framework:
  - Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources.
  - Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines.
- Data Quality & Validation:
  - Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data.
  - Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues.
- API Development:
  - Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems.
  - Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization.
- Collaboration & Agile Practices:
  - Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions.
  - Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab).

Required Qualifications
- Experience & Technical Skills:
  - Professional Background: At least 5 years of relevant experience in data engineering with a strong emphasis on analytical platform development.
  - Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation.
  - AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks.
  - Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift.
  - API Development: Proven experience in designing and implementing RESTful APIs and integrating them with external and internal systems.
  - CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably with GitLab) and Agile development methodologies.
- Soft Skills:
  - Strong problem-solving abilities and attention to detail.
  - Excellent communication and interpersonal skills with the ability to work independently and collaboratively.
  - Capacity to quickly learn and adapt to new technologies and evolving business requirements.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience with additional AWS services such as Kinesis, Firehose, and SQS.
- Familiarity with data lakehouse architectures and modern data quality frameworks.
- Prior experience in a role that required proactive data quality management and API-driven integrations in complex, multi-cluster environments.

Interested candidates, please send your resume to: gigin.raj@greenbayit.com
Mobile: 8943011666
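The validation-routine and error-handling responsibilities in the listing above follow a common dead-letter pattern: invalid records are routed aside with reasons rather than loaded. A minimal stdlib sketch (the schema and field names are illustrative, not from the posting):

```python
def validate_records(records, schema):
    """Dead-letter style validation for an ingestion pipeline.

    schema maps field name -> (expected type, required flag). Records
    failing any check go to a rejected list with reasons instead of
    being loaded downstream.
    """
    valid, rejected = [], []
    for rec in records:
        errors = []
        for field, (ftype, required) in schema.items():
            if rec.get(field) is None:
                if required:
                    errors.append(f"missing required field: {field}")
            elif not isinstance(rec[field], ftype):
                errors.append(
                    f"bad type for {field}: {type(rec[field]).__name__}")
        if errors:
            rejected.append({"record": rec, "errors": errors})
        else:
            valid.append(rec)
    return valid, rejected

schema = {"id": (int, True), "amount": (float, True), "note": (str, False)}
batch = [
    {"id": 1, "amount": 9.5, "note": "ok"},
    {"id": "2", "amount": 3.0},   # id has the wrong type
    {"amount": 1.0},              # id missing entirely
]
loaded, dead_letter = validate_records(batch, schema)
print(len(loaded), "loaded;", len(dead_letter), "dead-lettered")
```

In the AWS stack the posting names, the same check would typically run inside a Glue or Lambda step, with the dead-letter list landing in S3 or SQS for reprocessing and alerting.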

Posted 5 days ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site

Source: LinkedIn

🚀 We’re Hiring: Senior Data Engineer | Immediate Joiner
📍 Location: Kochi / Trivandrum | 💼 Experience: 10+ Years
🌙 Shift: US Overlapping Hours (till 10 PM IST)

We are looking for a Senior Data Engineer / Associate Architect who thrives on solving complex data problems and leading scalable data infrastructure development.

Must-Have Skillset:
✅ Python, PySpark
✅ AWS Glue, Lambda, Step Functions
✅ CI/CD (GitLab), API Development
✅ 5+ years hands-on AWS expertise
✅ Strong understanding of Data Quality, Validation & Monitoring

Role Highlights:
🔹 Build & optimize AWS-based data ingestion frameworks
🔹 Implement high-performance APIs
🔹 Drive data quality & integrity
🔹 Collaborate across teams in Agile environments

Nice to Have:
➕ Experience with Kinesis, Firehose, SQS
➕ Familiarity with Lakehouse architectures

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role Description
- Strong experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and tools such as Jenkins, GitLab CI, Bamboo, Ansible, Puppet, Chef, Docker, and Kubernetes.
- Proficient scripting skills in Python, Shell, PowerShell, Groovy, and Perl.
- Experience with infrastructure automation tools (Ansible, Puppet, Chef, Terraform).
- Experience managing repositories and migration automation (Git, Bitbucket, GitHub, ClearCase).
- Experience with build automation tools (Maven, Ant).
- Artifact repository management experience (Nexus, Artifactory).
- Knowledge of cloud infrastructure configuration and migration (AWS, Azure, Google Cloud).
- Experience integrating CI/CD pipelines with code quality and test automation tools (SonarQube, Selenium, JUnit, NUnit).
- Skilled in containerization technologies (Docker, Kubernetes).

Skills: AWS, DevOps, CI/CD, Terraform

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Description
Position at Wind River: Fullstack - MTS

Why Choose Wind River?
In a world increasingly driven by software innovation, Wind River is pioneering the technologies to accelerate the digital transformation of our customers with a new generation of Mission Critical AI Systems in an AI-first world with the most exacting standards for safety, security, performance, and reliability. Success will be determined by our ability to innovate with velocity and sell at the solutions level.

Wind River's impact spans critical infrastructure domains such as telecommunications, including 5G; industrial (automation, sustainable energy, robotics, mining); connected healthcare and medical devices; automotive (connected and self-driving vehicles); and aerospace & defense. We were recognized by VDC Research in July 2020 as #1 in Edge Compute OS Platforms, overtaking Microsoft as the overall commercial leader. Wind River regularly wins industry recognition for excellence in IoT security, cloud and edge computing, and has been named a "Top Work Place" for 8 consecutive years. If you're passionate about amplifying your impact on the world, in a caring, respectful culture with a growth mindset, come join us and help lead the way into the future of the intelligent edge!

About The Opportunity
Wind River Systems is seeking an experienced, high-performing DevSecOps software engineer for a position supporting a cloud-based application development team. The successful candidate will join a highly skilled development team delivering internal and external tools and technologies across a complete continuous testing platform providing support for test automation, pioneering many new industry-leading capabilities. The successful candidate must have experience in cloud-native software development and be a highly adaptable team player who can quickly ramp up on new technologies and accomplish goals in a fast-paced agile environment. A combination of strong technical and communication skills is a must.

About You
- BSEE/BSCS or equivalent experience
- Strong knowledge of microservices architecture, design principles, and patterns
- Solid experience in full-stack development, including both frontend and backend technologies
- Expertise in designing and developing RESTful APIs and integrating external services
- Proficiency in programming languages such as Python or Node.js
- Strong experience with SQL, database design, and DB migrations
- Strong experience with Git workflows
- Experience with frontend frameworks and languages such as Angular, JavaScript, and TypeScript
- Strong knowledge of CI/CD pipelines and related tools (e.g., Jenkins, GitLab CI/CD)
- Experience with Docker and Kubernetes
- Experience with cloud platforms such as AWS, Google Cloud, and Azure

Benefits
- Workplace Flexibility: Hybrid Work
- Medical insurance: Group Medical Insurance coverage, plus an additional shared-cost medical benefit in the form of reimbursements
- Employee Assistance Program
- Vacation and Time Off: Employees are eligible for various types of paid time off; additional time off for Birthday, Volunteer Time Off, and Wedding
- Wellness benefits through Unmind
- Carrot (family-forming support)
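The Wind River listing above emphasizes designing RESTful APIs. The core of any REST layer is routing a method and path, with path parameters, to a handler; here is a tiny framework-free sketch of that idea (handler and route names are invented, and a real service would use a framework rather than this):

```python
import re

# Route table mapping (method, compiled pattern) -> handler.
ROUTES = {}

def route(method, pattern):
    """Register a handler; '{name}' path segments become named parameters."""
    regex = re.compile(
        "^" + re.sub(r"\{(\w+)\}", r"(?P<\1>[^/]+)", pattern) + "$")
    def register(handler):
        ROUTES[(method, regex)] = handler
        return handler
    return register

def dispatch(method, path):
    """Find a matching route and call it with extracted path parameters."""
    for (m, regex), handler in ROUTES.items():
        match = regex.match(path)
        if m == method and match:
            return 200, handler(**match.groupdict())
    return 404, {"error": "not found"}

@route("GET", "/devices/{device_id}")
def get_device(device_id):
    # Illustrative payload only; a real handler would query a data store.
    return {"id": device_id, "status": "online"}

print(dispatch("GET", "/devices/42"))
print(dispatch("GET", "/missing"))
```

Frameworks like Flask or Express implement exactly this pattern-to-handler mapping, plus request parsing, middleware, and error handling on top.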

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Overview
Reporting to the Manager of Software Engineering, this position is a member of a small software group in AMETEK's India office. This group is part of a larger software team that includes software engineers in NJ. This software team is responsible for the design, development, and support of leading-edge software products that support our world-class Phantom Cameras. Primarily, this is a Graphical User Interface (GUI) product and a Software Development Kit (SDK) that controls and communicates with our cameras, downloads and views one or multiple cines (video files), and performs image processing, file transfers, file editing, etc. as required. The current GUI is written in C++, Qt, and QML; the legacy UI is written in C#; and the SDK is written in C/C++.

Job Responsibilities
The job responsibilities include, but are not limited to:
- The primary focus of this position will be releasing and supporting the SDK, using C/C++ and Microsoft Visual Studio, and working on the various desktop applications and libraries, mainly using Qt and QML.
- Other duties as assigned.

Necessary Skills/Talents
- B.S. degree (M.S. preferred) in Computer Science, Electrical Engineering, Computer Engineering, or equivalent, with 5-10 years of experience in software development.
- Dependable, driven, teachable person with a good work ethic who is excited to learn and take on new challenges.
- Thorough understanding of C/C++ design and programming concepts.
- Experience with Qt and QML.
- Image processing and compression; OpenCL; GPU (CUDA); Windows Sockets; familiarity with codecs, e.g. H.264 and H.265 (Microsoft Media Foundation Encoder), the DirectShow API, and the x264 and x265 codecs.
- Willingness to work a few hours 2 or 3 evenings a week to coordinate with the NJ software team.
- Flexible, able to change priorities when given new directives for the greater good of the team.
- Committed to progress and comfortable with occasional fluidity in hours, to ensure synchronicity between the India and US teams.
- Strong verbal and written communication skills.
- Experience in troubleshooting, debugging, and maintaining existing code.
- Excellent technical judgment and decision-making skills.
- Recognizes speed of execution as a competitive advantage for Vision Research and thus makes decisions and takes risks to support the rapid development of products and solutions.

Desirable Skills
- Experience with C#
- Experience on Linux and macOS is a plus
- GitLab, Git, CI/CD

Vision Research is a business unit in the Materials Analysis Division of AMETEK, Inc. Vision Research manufactures industry-leading high-speed digital cameras. Our cameras are primarily sold into industrial, academic, defense, and government research facilities. We also have a smaller entertainment-oriented camera business. Although not our primary focus, Vision Research has received both an Academy Award and an Emmy for our technical contribution to the entertainment industry. To learn more about Vision Research, Phantom cameras, and our imaging capabilities, please visit www.phantomhighspeed.com.

AMETEK, Inc. is a leading global provider of industrial technology solutions serving a diverse set of attractive niche markets with annual sales over $7.0 billion. AMETEK is committed to making a safer, sustainable, and more productive world a reality. We use differentiated technology solutions to solve our customers' most complex challenges. We employ 21,000 colleagues, in 35 countries, that are grounded by our core values: Ethics and Integrity, Respect for the Individual, Inclusion, Teamwork, and Social Responsibility. AMETEK (NYSE:AME) is a component of the S&P 500. Visit www.ametek.com for more information.

Posted 5 days ago


10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Title: Quantitative Trading Consultant – Specialist
Department: Technology
Location: Mumbai (In-office)
Budget: Up to ₹5,00,000 per month (₹60 LPA)
Experience Required: 10+ Years
Notice Period: Open to candidates currently serving notice
Urgency: Immediate Requirement
Company Type: Large-scale organization with substantial AUM and rapid growth

Role Overview
We are hiring a highly accomplished Quantitative Trading Consultant with deep expertise in building and running mid-frequency and low-frequency trading desks. This full-time specialist role demands a sharp, independent thinker with proven experience across the entire trading stack, from infrastructure setup to execution, risk, and compliance. You will work in a fast-paced, high-performance environment with direct access to senior leadership, contributing to a firm with a strong market presence and sizable assets under management (AUM).

Key Responsibilities
- Infrastructure Setup: Architect and implement scalable trading infrastructure: servers, execution gateways, and broker/exchange connectivity.
- Market Data Management: Build and maintain real-time market data feeds using WebSocket APIs, ensuring minimal latency and robust data reliability.
- Strategy Development & Backtesting: Create and enforce best practices for strategy research, backtesting, forward testing, and real-time deployment.
- Execution Systems: Develop fault-tolerant, low-latency execution engines with embedded risk controls and efficient error handling.
- Risk Management: Design real-time risk monitoring systems, enforce position and exposure limits, and ensure compliance with SEBI/NSE/BSE regulations.
- Monitoring & Alerting: Deploy and maintain monitoring systems using Prometheus, Grafana, and the ELK Stack for continuous visibility and alerts.
- Team Collaboration: Liaise with quants, developers, analysts, and DevOps to ensure smooth trading operations and system integration.
- Compliance & Documentation: Maintain detailed documentation of trading systems, workflows, risk controls, and regulatory compliance measures.

Required Skills & Qualifications
- Deep understanding of quantitative trading strategies, financial markets, and market microstructure
- Proficient in Python, with working knowledge of C++ or Rust for performance-critical components
- Expertise in real-time data pipelines using Kafka and Redis, and experience with PostgreSQL, MongoDB, or TimescaleDB
- Familiarity with CI/CD pipelines, GitLab/GitHub, Docker, Kubernetes, and cloud platforms (AWS/GCP)
- Proven experience in WebSocket API integration and building latency-sensitive systems
- Strong analytical mindset, risk awareness, and problem-solving skills
- Sound understanding of Indian market compliance standards

Preferred Experience
- Prior ownership of, or key contribution to, a quant trading desk (mid-frequency or low-frequency)
- Experience in Indian equity, futures, and options markets
- Experience with algorithmic trading infrastructure and strategy deployment

Reporting
This role reports directly to senior management and works closely with the trading, tech, and risk leadership teams.

Skills: Kubernetes, WebSocket API, MongoDB, quantitative trading strategies, cloud platforms, GCP, GitLab, AWS, TimescaleDB, C++, market microstructure, Redis, financial markets, Kafka, risk management, CI/CD pipelines, API, Indian market compliance standards, real-time data pipelines, backtesting, GitHub, PostgreSQL, Python, Docker, Rust
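The risk-management duty above (enforcing position and exposure limits in real time) can be sketched as a minimal pre-trade check. This is an illustrative sketch only: the class name, limit values, and ticker symbols below are assumptions, not details from the posting.

```python
class PreTradeRiskChecker:
    """Reject orders that would breach per-symbol position or gross notional limits.

    Minimal illustrative sketch; a production desk would add exposure netting,
    kill switches, and exchange-specific (SEBI/NSE/BSE) rule checks.
    """

    def __init__(self, max_position: int, max_gross_notional: float):
        self.max_position = max_position              # absolute net position per symbol
        self.max_gross_notional = max_gross_notional  # total |qty| * price across symbols
        self.positions: dict = {}                     # symbol -> signed quantity
        self.prices: dict = {}                        # symbol -> last seen price

    def gross_notional(self) -> float:
        """Current total absolute notional across all held symbols."""
        return sum(abs(q) * self.prices.get(s, 0.0) for s, q in self.positions.items())

    def check_and_apply(self, symbol: str, qty: int, price: float) -> bool:
        """Return True and book the fill if both limits hold; otherwise reject."""
        new_pos = self.positions.get(symbol, 0) + qty
        if abs(new_pos) > self.max_position:
            return False
        # Recompute gross notional as if this symbol's leg were replaced by the new one.
        old_leg = abs(self.positions.get(symbol, 0)) * self.prices.get(symbol, price)
        if self.gross_notional() - old_leg + abs(new_pos) * price > self.max_gross_notional:
            return False
        self.positions[symbol] = new_pos
        self.prices[symbol] = price
        return True
```

In use, the execution engine would call `check_and_apply` before routing each order, so a breach is rejected locally instead of reaching the exchange.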

Posted 5 days ago


0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Title: Quantitative Trading Consultant – Operations & Trading Systems
Location: Mumbai (In-office)
Compensation: Up to ₹1,60,000 per month (₹10–20 LPA based on experience)
Industry: Operations / Manufacturing / Production / Trading
Type: Full-time | On-site

Role Overview
We are seeking a highly skilled and technically sound Quantitative Trading Consultant to lead the setup and execution of our mid-frequency and low-frequency trading desk. This role requires a deep understanding of trading infrastructure, execution systems, real-time data management, and risk control. You will be responsible for building the trading architecture from the ground up, collaborating with research and tech teams, and ensuring regulatory compliance in Indian financial markets.

Key Responsibilities
- Infrastructure Setup: Design and implement end-to-end trading infrastructure: data servers, execution systems, broker/exchange connectivity.
- Real-Time Data Handling: Build and maintain real-time market data feeds using WebSocket APIs, ensuring minimal latency and high reliability.
- Strategy Development Framework: Establish frameworks and tools for backtesting, forward testing, and strategy deployment across multiple asset classes.
- Execution System Development: Develop low-latency, high-reliability execution code with robust risk and error-handling mechanisms.
- Risk Management: Design and implement real-time risk control systems, including position sizing, exposure monitoring, and compliance with SEBI/NSE/BSE regulations.
- Monitoring & Alerting: Set up systems using Prometheus, Grafana, and the ELK stack for monitoring, logging, and proactive issue alerts.
- Team Collaboration: Work closely with quant researchers, DevOps, developers, and analysts to ensure smooth desk operations.
- Documentation & Compliance: Maintain detailed documentation of all infrastructure, workflows, trading protocols, and risk procedures. Ensure adherence to relevant regulatory guidelines.

Required Skills & Qualifications
- Expert knowledge of quantitative trading, market microstructure, and execution strategy.
- Strong programming skills in Python, with working knowledge of C++ or Rust for performance-critical modules.
- Hands-on experience with WebSocket API integration, Kafka, Redis, and PostgreSQL/TimescaleDB/MongoDB.
- Familiarity with CI/CD tools, GitHub/GitLab, Docker, Kubernetes, and AWS/GCP cloud environments.
- Sound understanding of risk management frameworks and compliance in Indian markets.
- Excellent problem-solving and analytical thinking abilities.
- Strong attention to detail, documentation, and process adherence.

Preferred Experience
- Previous experience in setting up or managing a quantitative trading desk (mid-frequency or low-frequency).
- Hands-on exposure to Indian equities, futures, and options markets.
- Experience working in a high-growth, fast-paced trading or hedge fund environment.

Reporting Structure
This role reports directly to senior management and works cross-functionally with technology, trading, and risk management teams.

Why Join Us
- Opportunity to build and lead the trading infrastructure from the ground up.
- Work in a high-growth company with a strong focus on innovation and technology.
- Collaborate with top talent across trading, development, and research.
- Gain exposure to cutting-edge trading tools and modern cloud-native infrastructure.

Skills: quantitative trading, attention to detail, Python, C++, risk management, Redis, problem-solving, Rust, execution strategy, GitLab, Kafka, Docker, GitHub, analytical thinking, market microstructure, MongoDB, PostgreSQL, CI/CD, AWS, Kubernetes, regulatory compliance, WebSocket API, GCP, TimescaleDB, monitoring, API, alerting

Posted 5 days ago


2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the GCP/Azure Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and Azure platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and Azure services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
- Design and implement secure, scalable, and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements
- Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, Google Cloud Deployment Manager, or Azure Resource Manager (ARM) templates, ensuring efficient, repeatable, and consistent infrastructure deployments
- Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
- Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
- Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
- Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
- Keep up to date with new GCP/Azure services, features, and best practices, providing recommendations for process and architecture improvements

Education And Experience
- Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A Master's degree or relevant certifications would be a plus.
- Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
- Proven experience with core GCP/Azure services, including compute (Compute Engine / Azure Virtual Machines), object storage (Cloud Storage / Blob Storage), managed databases (Cloud SQL / Azure SQL), serverless functions (Cloud Functions / Azure Functions), virtual networks, and IAM
- Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, Google Cloud Deployment Manager, or ARM templates
- Strong scripting skills in Python, Bash, or PowerShell for automation tasks
- Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure
- Knowledge of networking fundamentals and experience with GCP/Azure virtual networks, security groups, VPN, and routing
- Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
- Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments
- Preferably with certifications such as Google Professional Cloud DevOps Engineer, Microsoft Azure Administrator Associate, Google Professional Cloud Architect, or Azure Solutions Architect Expert

Skills And Behavioral Competencies
- Excellent problem-solving and troubleshooting abilities
- Result orientation, influence, and impact
- Empowerment and accountability, with the ability to work independently
- Team spirit, building relationships, collective accountability
- Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
- Be part of and contribute to a once-in-a-lifetime change journey
- Join a dynamic team that is going to tackle big bets
- Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 5 days ago


2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the AWS Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and AWS platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and AWS services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
- Design and implement secure, scalable, and highly available cloud infrastructure using GCP/AWS services, based on business and technical requirements
- Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, or the AWS CDK, ensuring efficient, repeatable, and consistent infrastructure deployments
- Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
- Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
- Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
- Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
- Keep up to date with new GCP/AWS services, features, and best practices, providing recommendations for process and architecture improvements

Education and experience
- Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A Master's degree or relevant certifications would be a plus.
- Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
- Proven experience with GCP/AWS services, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation
- Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, or the AWS CDK
- Strong scripting skills in Python, Bash, or PowerShell for automation tasks
- Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/AWS
- Knowledge of networking fundamentals and experience with GCP/AWS VPCs, security groups, VPN, and routing
- Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
- Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments
- Preferably with certifications such as AWS Certified DevOps Engineer, AWS Certified SysOps Administrator, AWS Certified Solutions Architect, or Google Professional Cloud Architect

Skills and behavioral competencies
- Excellent problem-solving and troubleshooting abilities
- Result orientation, influence, and impact
- Empowerment and accountability, with the ability to work independently
- Team spirit, building relationships, collective accountability
- Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
- Be part of and contribute to a once-in-a-lifetime change journey
- Join a dynamic team that is going to tackle big bets
- Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 5 days ago


5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site


Job Summary
We are looking for an experienced DevOps Lead to join our technology team and drive the design, implementation, and optimization of our DevOps processes and infrastructure. You will lead a team of engineers to ensure smooth CI/CD workflows, scalable cloud environments, and high availability for all deployed applications. This is a hands-on leadership role requiring a strong technical foundation and a collaborative mindset.

Key Responsibilities
- Lead the DevOps team and define best practices for CI/CD pipelines, release management, and infrastructure automation.
- Design, implement, and maintain scalable infrastructure using tools such as Terraform, CloudFormation, or Ansible.
- Manage and optimize cloud services (e.g., AWS, Azure, GCP) for cost, performance, and security.
- Oversee monitoring, alerting, and logging systems (e.g., Prometheus, Grafana, ELK, Datadog).
- Implement and enforce security, compliance, and governance policies in cloud environments.
- Collaborate with development, QA, and product teams to ensure reliable and efficient software delivery.
- Lead incident response and root cause analysis for production issues.
- Evaluate new technologies and tools to improve system efficiency and reliability.

Required Qualifications
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5+ years of experience in DevOps or SRE roles, with at least 2 years in a lead or managerial capacity.
- Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI/CD).
- Expertise in infrastructure as code (IaC) and configuration management.
- Proficiency in scripting languages (e.g., Python, Bash).
- Deep knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
- Experience with version control (Git), artifact repositories, and deployment strategies (blue/green, canary).
- Solid understanding of networking, DNS, firewalls, and security protocols.

Preferred Qualifications
- Certifications (e.g., Azure Certified DevOps Engineer, CKA/CKAD).
- Experience in a regulated environment (e.g., HIPAA, PCI, SOC 2).
- Exposure to observability platforms and chaos engineering practices.
- Background in agile/scrum methodologies.

Skills:
- Strong leadership and team-building capabilities.
- Excellent problem-solving and troubleshooting skills.
- Clear and effective communication, both written and verbal.
- Ability to work under pressure and adapt quickly in a fast-paced environment.

(ref:hirist.tech)
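The deployment strategies named in the qualifications (blue/green, canary) rest on splitting traffic deterministically between two releases. A minimal hash-based canary router might look like the sketch below; the function names and bucketing scheme are illustrative assumptions, not a prescribed implementation.

```python
import hashlib

def canary_bucket(user_id: str) -> int:
    """Deterministically map a user id to a bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def route(user_id: str, canary_percent: int) -> str:
    """Send a stable slice of traffic to the canary release.

    The same user always lands in the same bucket, so sessions don't
    flip-flop between versions as requests repeat, and the canary share
    can be widened gradually (5 -> 25 -> 100) as confidence grows.
    """
    return "canary" if canary_bucket(user_id) < canary_percent else "stable"
```

Hashing rather than random sampling is the key design choice: it makes rollout percentage changes monotonic, so users already on the canary stay on it as the percentage increases.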

Posted 5 days ago


5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Company: Our client is a leading Indian multinational IT services and consulting firm. It provides digital transformation, cloud computing, data analytics, enterprise application integration, infrastructure management, and application development services. The company caters to over 700 clients across various industries, including banking and financial services, manufacturing, technology, media, retail, and travel and hospitality. Its industry-specific solutions are designed to address complex business challenges by combining domain expertise with deep technical capabilities, backed by a global workforce of over 80,000 professionals and a presence in more than 50 countries.

Job Title: Python Developer
Locations: PAN India
Experience: 5-10 Years (Relevant)
Employment Type: Contract to Hire
Work Mode: Work From Office
Notice Period: Immediate to 15 Days

Job Description:
- Cloud Computing: Proficiency in cloud platforms such as AWS, Google Cloud, or Azure
- Containerization: Experience with Docker and Kubernetes for container orchestration
- CI/CD: Strong knowledge of continuous integration and continuous delivery processes using tools like Jenkins, GitLab CI, or Azure DevOps
- Infrastructure as Code (IaC): Experience with IaC tools such as Terraform or CloudFormation
- Scripting and Programming: Proficiency in scripting languages (e.g., Python, Bash) and programming languages (e.g., Java, Go)
- Monitoring and Logging: Familiarity with monitoring tools (e.g., Prometheus, Grafana) and logging tools (e.g., the ELK stack)
- Security: Knowledge of security best practices and tools for securing platforms and data
- Networking: Understanding of networking concepts and technologies
- Database Management: Experience with both SQL and NoSQL databases
- Automation: Proficiency in automation tools and frameworks
- Version Control: Strong knowledge of version control systems like Git
- Development Understanding: Solid understanding of the software development life cycle (SDLC) and experience working closely with development teams

Mandatory Skills: Azure API Management, Azure Blob Storage, Azure Cloud Architecture, Azure Container Apps, Azure Cosmos DB, Azure DevOps, Azure Event Grid, Azure Functions, Azure IoT, Docker, Kubernetes
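Several of the areas listed above (automation scripting, cloud APIs, CI/CD) routinely involve calls that must tolerate transient failures. A minimal retry-with-exponential-backoff decorator in Python, offered only as a hedged sketch (names and delay values are illustrative), could look like:

```python
import time
from functools import wraps

def retry(max_attempts: int = 3, base_delay: float = 0.1):
    """Retry a flaky call with exponential backoff (0.1s, 0.2s, 0.4s, ...).

    Illustrative sketch: a real automation script would usually narrow the
    caught exception type and add jitter to avoid synchronized retries.
    """
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise  # exhausted: surface the last error
                    time.sleep(base_delay * (2 ** attempt))
        return wrapper
    return decorator
```

Wrapping a cloud API call with `@retry(max_attempts=5)` then lets transient network errors resolve themselves without failing the whole automation run.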

Posted 5 days ago


4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for AWS DevOps Engineers who can exhibit their technology skills and influence team members to increase overall productivity and effectiveness by sharing in-depth knowledge of Cloud and DevOps. You will lead and contribute to defining industry-standard Cloud and DevOps best practices for the team to follow. This is an outstanding opportunity to apply and enhance both your technical and people management skills, adding value to the business and operations of CloudifyOps.

What will you do:
- Own the quality of Cloud and DevOps architecture, design, and delivery for client engagements.
- Design the automation of Cloud and DevOps operations and processes with appropriate tools.
- Advise on implementing Cloud and DevOps best practices and provide architectural and design support to the team.
- Break complex issues into simpler, straightforward solutions for the engineers on the team to execute.
- Assist in technical and design meetings with clients to help them adopt Cloud and DevOps tools, technologies, and practices.
- Take ownership of the end-to-end development and implementation quality of the team by managing dependencies and focusing on technology best practices.

What we are looking for:
- 4+ years of hands-on DevOps and Cloud experience.
- Experience helping to architect, design, and develop Cloud and DevOps practices on the AWS cloud platform.
- Expertise in implementing CI/CD pipelines with various DevOps tool sets.
- Ability to evaluate, implement, and streamline DevOps practices for clients, speeding up the software development and deployment process.
- Hands-on experience with CI/CD tools (e.g., Jenkins, SonarQube, Artifactory/Nexus) and source control (e.g., Git via Bitbucket, GitHub, or GitLab; SVN).
- Experience creating continuous delivery practices using Terraform, ARM templates, or CloudFormation.
- Excellent knowledge of infrastructure configuration and automation tools (e.g., Ansible) in both development and production environments.
- Experience designing and building applications using container and serverless technologies. Kubernetes expertise is a must-have; knowledge of service mesh, tracing, Helm charts, etc. would be an added advantage.
- Strong expertise in operating Linux environments and scripting languages such as shell.
- Good presentation and communication skills.
- Willingness to be challenged and learn new skills.
- Finally, and most importantly, the ability to lead and grow your team by example.
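The CI/CD pipeline work described above is commonly expressed in GitLab as a `.gitlab-ci.yml` file. The fragment below is a minimal illustrative sketch only: the stage names, container image, and script commands are assumptions, not details from the posting.

```yaml
# Minimal illustrative .gitlab-ci.yml; all names and commands are placeholders.
stages:
  - build
  - test
  - deploy

build-job:
  stage: build
  image: python:3.12
  script:
    - pip install -r requirements.txt

test-job:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt
    - pytest

deploy-job:
  stage: deploy
  script:
    - echo "Deploying..."
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
```

The `rules:` clause gates deployment to the main branch, while build and test run on every push, which is the usual starting shape before adding environments, approvals, and canary stages.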

Posted 5 days ago


7.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Quality Assurance (QA) Lead

Position Overview
We are seeking an experienced Quality Assurance Lead to join our engineering team. The ideal candidate will be responsible for developing and implementing quality assurance processes, leading a team of QA engineers, and ensuring the delivery of high-quality software products. This role combines technical expertise with leadership skills to drive quality initiatives across multiple projects.

Key Responsibilities
- Lead and mentor a team of QA engineers, providing technical guidance and career development support
- Develop and maintain QA processes, methodologies, and best practices across projects
- Create and execute comprehensive test strategies, including manual and automated testing approaches
- Collaborate with development teams to integrate quality assurance throughout the software development lifecycle
- Establish quality metrics and KPIs to measure and improve testing effectiveness
- Perform risk assessment and implement mitigation strategies for complex projects
- Review and approve test plans, test cases, and test results
- Coordinate with product managers and stakeholders to understand requirements and quality expectations
- Drive continuous improvement initiatives in testing processes and tools
- Manage resource allocation and capacity planning for QA activities

Technical Skills
- Test Automation: Cucumber, Playwright, Selenium, or similar frameworks
- API Testing: Postman, REST Assured, or similar tools
- Performance Testing: k6, JMeter
- CI/CD Tools: Jenkins, GitLab CI, or similar
- Bug Tracking: Jira, Azure DevOps, or similar
- Programming/Scripting: Python, Java, or JavaScript
- Knowledge of testing generative-AI-based applications
- Database Knowledge: SQL, MongoDB
- Version Control: Git

Soft Skills
- Excellent communication and interpersonal skills
- Strategic thinking and problem-solving capabilities
- Ability to work under pressure and manage multiple priorities
- Strong analytical and organizational skills
- Stakeholder management experience
- Conflict resolution abilities

Required Qualifications
- Bachelor's degree in computer science, engineering, or a related field
- 7 to 10 years of software testing experience, with at least 3 years in a lead role
- Strong experience with test automation frameworks and tools
- Proficiency with test management tools and defect tracking systems
- Experience with Agile methodologies and CI/CD practices
- Strong understanding of web technologies, APIs, and mobile applications
- Experience with performance testing and security testing
- Proven track record of leading and mentoring QA teams

Preferred Qualifications
- Professional certifications (ISTQB, PMP, or similar)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of security testing tools and methodologies
- Background in development or DevOps
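The responsibility to "establish quality metrics and KPIs" can be made concrete with two common measures: test pass rate and defect escape rate. The helper functions below are an illustrative sketch of these standard formulas, not metrics prescribed by the posting.

```python
def pass_rate(passed: int, failed: int) -> float:
    """Share of executed tests that passed, as a percentage."""
    executed = passed + failed
    return 100.0 * passed / executed if executed else 0.0

def defect_escape_rate(found_in_prod: int, found_in_qa: int) -> float:
    """Share of all known defects that slipped past QA into production.

    A falling escape rate over successive releases is the usual signal
    that test coverage and strategy changes are working.
    """
    total = found_in_prod + found_in_qa
    return 100.0 * found_in_prod / total if total else 0.0
```

Trending these two numbers per release (e.g., on a CI dashboard) gives the lead a simple, defensible baseline before layering on richer KPIs like automation coverage or mean time to detect.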

Posted 5 days ago


0 years

0 Lacs

Pune, Maharashtra, India

On-site


Join us as a Full Stack Developer at Barclays, where you will spearhead the evolution of our infrastructure and deployment pipelines, driving innovation and operational excellence. You will harness cutting-edge technology to build and manage robust, scalable, and secure infrastructure, ensuring seamless delivery of our digital solutions.

To be successful as a Full Stack Developer you should have experience with:
- Java, Spring, Spring Boot
- HTML5/CSS/JavaScript; corporate experience with React, ES6, and TypeScript
- Gradle, Spring Batch
- Unit testing front-end components (ideally Jest), JUnit, Mockito
- Spring Config Server, Node.js

Some other highly valued skills may include:
- CI/CD, e.g., GitLab
- Containerisation, e.g., Docker
- OpenShift, OpenAPI Specification
- Experience/knowledge with Spotify Backstage

You may be assessed on key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills. The role is based out of Pune.

Purpose of the role
To design, develop and improve software, utilising various engineering methodologies, that provides business, platform, and technology capabilities for our customers and colleagues.

Accountabilities
- Development and delivery of high-quality software solutions by using industry-aligned programming languages, frameworks, and tools, ensuring that code is scalable, maintainable, and optimized for performance.
- Cross-functional collaboration with product managers, designers, and other engineers to define software requirements, devise solution strategies, and ensure seamless integration and alignment with business objectives.
- Collaboration with peers, participation in code reviews, and promotion of a culture of code quality and knowledge sharing.
- Staying informed of industry technology trends and innovations, and actively contributing to the organization’s technology communities to foster a culture of technical excellence and growth.
- Adherence to secure coding practices to mitigate vulnerabilities, protect sensitive data, and ensure secure software solutions.
- Implementation of effective unit testing practices to ensure proper code design, readability, and reliability.

Analyst Expectations
- Meet the needs of stakeholders/customers through specialist advice and support.
- Perform prescribed activities in a timely manner and to a high standard, impacting both the role itself and surrounding roles.
- Likely to have responsibility for specific processes within a team.
- May lead and supervise a team, guiding and supporting professional development, allocating work requirements, and coordinating team resources.
- If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.
- For an individual contributor, manage your own workload, take responsibility for the implementation of systems and processes within your own work area, and participate in projects broader than the direct team.
- Execute work requirements as identified in processes and procedures, collaborating with and impacting the work of closely related teams.
- Check the work of colleagues within the team to meet internal and stakeholder requirements.
- Provide specialist advice and support pertaining to your own work area.
- Take ownership of managing risk and strengthening controls in relation to the work you own or contribute to.
- Deliver your work and areas of responsibility in line with relevant rules, regulations, and codes of conduct.
- Maintain and continually build an understanding of how all teams in the area contribute to the objectives of the broader sub-function, delivering impact on the work of collaborating teams.
- Continually develop awareness of the underlying principles and concepts on which the work within the area of responsibility is based, building upon administrative/operational expertise.
- Make judgements based on practice and previous experience.
- Assess the validity and applicability of previous or similar experiences and evaluate options under circumstances that are not covered by procedures.
- Communicate sensitive or difficult information to customers in areas related specifically to customer advice or day-to-day administrative requirements.
- Build relationships with stakeholders/customers to identify and address their needs.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 5 days ago

Apply

0.0 - 2.0 years

0 Lacs

Mohali, Punjab

On-site


The Role
As a DevOps Engineer, you will be an integral part of the product and service division, working closely with development teams to ensure seamless deployment, scalability, and reliability of our infrastructure. You'll help build and maintain CI/CD pipelines, manage cloud infrastructure, and contribute to system automation. Your work will directly impact the performance and uptime of our flagship product, BotPenguin.

What you need for this role
- Education: Bachelor's degree in Computer Science, IT, or a related field.
- Experience: 2-5 years in DevOps or similar roles.
- Technical skills:
  - Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
  - Experience with containerization and orchestration using Docker and Kubernetes.
  - Strong understanding of cloud platforms, especially AWS and Azure.
  - Familiarity with infrastructure-as-code tools such as Terraform or CloudFormation.
  - Knowledge of monitoring and logging tools like Prometheus, Grafana, and the ELK Stack.
  - Good scripting skills in Bash, Python, or similar languages.
- Soft skills:
  - Detail-oriented with a focus on automation and efficiency.
  - Strong problem-solving abilities and a proactive mindset.
  - Effective communication and collaboration skills.

What you will be doing
- Build, maintain, and optimize CI/CD pipelines.
- Monitor and improve system performance, uptime, and scalability.
- Manage and automate cloud infrastructure deployments.
- Work closely with developers to support release processes and environments.
- Implement security best practices in deployment and infrastructure management.
- Ensure high availability and reliability of services.
- Document procedures and provide support for technical troubleshooting.
- Contribute to training junior team members, and assist HR and operations teams with tech-related concerns as required.

Top reasons to work with us
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, health insurance, leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab – reliably commute or plan to relocate before starting work (required)
Experience: DevOps: 2 years (required)
Work Location: In person
Speak with the employer: +91 8319533183

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Position: Performance Tester (AWS)
Experience: 5 years
Notice period: Immediate joiners to 10 days
Location: Pune

Job Summary
As an SRE Performance Tester, you will play a crucial role in maintaining and improving the performance, scalability, and reliability of applications hosted on AWS. You will be responsible for designing and executing performance tests across AWS infrastructure, specifically focusing on EKS (Elastic Kubernetes Service) and Lambda functions. Your insights will help optimize systems and deliver exceptional performance to users.

Key Responsibilities
- Design, develop, and execute performance test plans for applications running on AWS.
- Analyze performance test results to identify bottlenecks and areas for improvement.
- Collaborate with development teams to understand application architecture and gather requirements for performance testing.
- Monitor application performance in real time and provide actionable recommendations based on data analysis.
- Create automated performance testing scripts and integrate them into CI/CD pipelines.
- Work with infrastructure teams to ensure optimal resource allocation and configuration for high performance.
- Document testing processes, results, and recommendations clearly for stakeholders.
- Stay up to date with industry trends and best practices in performance testing and cloud infrastructure.

Requirements
- Proven experience as a Performance Tester with a focus on AWS services.
- Strong knowledge of AWS services including EKS, Lambda, EC2, RDS, etc.
- Experience with performance testing tools (e.g., JMeter, Gatling, LoadRunner).
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI) and Agile methodologies.
- SRE experience is a plus.
- Excellent analytical skills with the ability to troubleshoot complex issues.
- Strong communication skills to collaborate effectively with cross-functional teams.

Other Information
Educational qualifications: Bachelor's or master's degree in Computer Science, Engineering, or a related field.

Posted 5 days ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Bangalore, Karnataka, India | Job ID 767284

Join our Team

About this opportunity:
We are seeking a highly motivated and detail-oriented experienced Cloud Engineer to join our dynamic software DevOps team. You should be a curious professional, eager to grow, and an excellent team player! As a Cloud Engineer, you will work closely with our r-Apps DevOps team to gain exposure to cloud-native infrastructure, automation, and optimization tasks. You will support the implementation and maintenance of CI/CD, deployments, Helm, and security aspects of cloud-native applications and environments, assist with troubleshooting, and contribute to the SaaS/AaaS-based microservice solutions development team.

What you will do:
- AWS Cloud: Work with AWS Cloud pipelines and AWS CloudFormation (IaC).
- Kubernetes & Helm: Kubernetes administration and cloud-native application packaging/management using Helm charts.
- CI/CD: Design and implement CI/CD using Jenkins and Spinnaker.
- Automation & scripting: Develop and maintain scripts to automate routine tasks using technologies such as Ansible, Python, and shell scripting.
- Monitoring & optimization: Monitor microservice resources for performance and availability; assist in optimizing environments to enhance performance.
- Troubleshooting: Troubleshoot and resolve issues within AaaS applications, focusing on resource failures, performance degradation, and connectivity disruptions.
- Documentation: Assist in documenting DevOps infrastructure setups, processes, and workflows, and help maintain knowledge base articles.
- Learning & development: Continuously expand your knowledge of cloud technologies and cloud architecture, and stay updated on the latest trends in cloud computing.

You will bring:
- Bachelor's or master's degree in Computer Science, Software Engineering, or a related field.
- Experience with cloud platforms like AWS.
- Proficiency in containerization and orchestration using Docker and Kubernetes.
- Proficiency in using Helm for managing Kubernetes applications, including creating and deploying Helm charts.
- Experience with CI/CD tools like Jenkins, Spinnaker, and GitLab.
- Experience with monitoring tools such as Prometheus and Grafana.
- Experience implementing and managing security tools for CI/CD pipelines, cloud environments, and containerized applications.
- Experience with scripting and automation (e.g., Python, Bash, Ansible).
- Strong problem-solving skills and the ability to troubleshoot cloud-native infrastructure.
- Good communication skills and the ability to work effectively in a team environment.
- Eagerness to learn new technologies and contribute to cloud-native applications.
- Understanding of the software development lifecycle (SDLC) and agile methodologies.

Preferred qualifications:
- Certifications or hands-on experience with AWS.
- Exposure to AI services for DevOps.
- Predictive analysis on monitoring of AaaS applications.
- Experience designing and enforcing security best practices across the entire DevOps lifecycle.
- Familiarity with industry security standards and frameworks (e.g., CIS, NIST, OWASP).

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity: the chance to use your skills and imagination to push the boundaries of what's possible, and to build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Posted 5 days ago

Apply

0 years

0 Lacs

India

On-site


Job Role: Computer and Information Systems Managers (Workflow Annotation Specialist)
Project Type: Contract-based / Freelance / Part-time – 1 month

Job Overview:
We are seeking domain experts to participate in a Workflow Annotation Project. The role involves documenting and annotating the step-by-step workflows of key tasks within the candidate's area of expertise. The goal is to capture real-world processes in a structured format for AI training and process optimization purposes.

Domain Expertise Required:
- Plan and deliver IT projects on time and within scope
- Supervise technical and project staff
- Oversee IT infrastructure and operations
- Enforce information security policies and protocols
- Manage vendor contracts and service agreements
- Align technology strategy with overall business objectives

Tools & Technologies You May Have Worked With:
- Project & task management: Jira, Microsoft Project, Smartsheet
- Monitoring & analytics: Datadog, Splunk
- Security tools: Nessus, Qualys
- Service management: ServiceNow, Zendesk
- Cloud platforms: AWS Console, Azure Portal, Google Cloud Console
- Enterprise systems: SAP, Oracle ERP
- Collaboration tools: Slack, Microsoft Teams

Open Source / Free Software Experience:
- Project management: OpenProject, Taiga, Kanboard
- Monitoring & visualization: Zabbix, Prometheus + Grafana
- Security tools: OpenVAS
- Version control & DevOps: GitLab Community Edition (CE)
- Collaboration & support: Rocket.Chat, osTicket
- ERP systems: Odoo Community Edition

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote


GitLab is an open core software company that develops the most comprehensive AI-powered DevSecOps Platform, used by more than 100,000 organizations. Our mission is to enable everyone to contribute to and co-create the software that powers our world. When everyone can contribute, consumers become contributors, significantly accelerating the rate of human progress. This mission is integral to our culture, influencing how we hire, build products, and lead our industry. We make this possible at GitLab by running our operations on our product and staying aligned with our values. Learn more about Life at GitLab. Thanks to products like Duo Enterprise, and Duo Workflow, customers get the benefit of AI at every stage of the SDLC. The same principles built into our products are reflected in how our team works: we embrace AI as a core productivity multiplier. All team members are encouraged and expected to incorporate AI into their daily workflows to drive efficiency, innovation, and impact across our global organisation. An Overview Of This Role As the Engineering Manager for GitLab Dedicated, you’ll lead a high-performing, globally distributed team focused on delivering a secure, scalable, and reliable Dedicated SaaS offering. Your mission is to create an environment where engineers can thrive and deliver meaningful impact. You’ll work closely with a Product Manager to align business needs with technical execution, applying strong engineering practices to evolve GitLab Dedicated. This includes supporting new feature enablement, scaling efforts, compliance needs, and process automation. You’ll also participate in the incident escalation rotation to help meet our service availability and SLA commitments. GitLab Dedicated teams own the full lifecycle of the Dedicated SaaS service—from platform evolution to customer enablement—with a focus on performance, reliability, and continuous improvement. 
What You Will Do
- Lead the newly formed Environment Automation team within the Dedicated Group, in partnership with the team's Product Manager.
- Support a team of engineers through clear direction, meaningful feedback, and an environment where they can thrive and grow.
- Contribute to the availability, security, and scalability of GitLab Dedicated by enabling automation and sound engineering practices.
- Foster an inclusive and collaborative team culture grounded in GitLab's values.
- Recruit, onboard, and develop engineers with diverse backgrounds and experiences.
- Guide Agile project delivery, ensuring alignment with team goals and organizational priorities.
- Continuously improve product quality, performance, and security in close collaboration with cross-functional teams.

What You Will Bring
- Experience leading teams in SaaS, Infrastructure, Site Reliability, or similar domains, with a focus on single- or multi-tenant systems.
- Ability to break down technical topics into accessible, business-aligned language for a range of stakeholders.
- Demonstrated experience supporting engineers' growth through mentorship, coaching, and equitable development practices.
- Familiarity with operating and scaling cloud-native platforms and production environments.
- Experience navigating incident response and collaborating across teams to resolve outages.
- A thoughtful and inclusive leadership approach to team building and cross-functional collaboration.
- A track record of delivering results through shared ownership and clear communication.

About The GitLab Dedicated Team
The GitLab Dedicated team's mission is to deliver a fully managed, single-tenant GitLab environment through the GitLab Dedicated platform. This aims to minimize manual interactions with customer tenant installations and allow customers to focus on leveraging the full potential of the One DevOps Platform.
The team is dedicated to ensuring the reliability, scalability, performance, and security of GitLab Dedicated and its supporting services. They strive to develop automated and scalable solutions utilizing GitLab features and cloud vendor managed products to reduce complexity, enhance efficiency, and accelerate the delivery of new capabilities.

How GitLab Will Support You
- Benefits to support your health, finances, and well-being
- All-remote, asynchronous work environment
- Flexible Paid Time Off
- Team Member Resource Groups
- Equity Compensation & Employee Stock Purchase Plan
- Growth and development budget
- Parental leave
- Home office support

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

Remote-Global

The base salary range for this role's listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of the education, experience, knowledge, skills, and abilities of the applicant, equity with other team members, and alignment with market data. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.

California/Colorado/Hawaii/New Jersey/New York/Washington/DC/Illinois/Minnesota pay range: $142,800 – $306,000 USD

Country Hiring Guidelines: GitLab hires new team members in countries around the world. All of our roles are remote; however, some roles may carry specific location-based eligibility requirements. Our Talent Acquisition team can help answer any questions about location after starting the recruiting process.
Privacy Policy: Please review our Recruitment Privacy Policy. Your privacy is important to us. GitLab is proud to be an equal opportunity workplace and is an affirmative action employer. GitLab’s policies and practices relating to recruitment, employment, career development and advancement, promotion, and retirement are based solely on merit, regardless of race, color, religion, ancestry, sex (including pregnancy, lactation, sexual orientation, gender identity, or gender expression), national origin, age, citizenship, marital status, mental or physical disability, genetic information (including family medical history), discharge status from the military, protected veteran status (which includes disabled veterans, recently separated veterans, active duty wartime or campaign badge veterans, and Armed Forces service medal veterans), or any other basis protected by law. GitLab will not tolerate discrimination or harassment based on any of these characteristics. See also GitLab’s EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know during the recruiting process.

Posted 5 days ago

Apply

8.0 years

0 Lacs

Haveli, Maharashtra, India

On-site


Automation Engineer
Date: May 16, 2025
Job Requisition Id: 61017
Location: Pune, MH, IN

YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies.
Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world – and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring Selenium professionals in the following areas:

Job Description:
RRF Name: Automation Engineer

Job Summary:
We are seeking a skilled Automation Specialist with 8+ years of hands-on experience in designing and implementing automation frameworks. The ideal candidate will have deep expertise in Python, Selenium, backend and API automation, and a strong grounding in object-oriented programming. Proficiency with AWS Cloud services and automation on cloud infrastructure is essential. Exposure to Agile environments is required, while experience or interest in AI/ML, MLOps, or RPA tools is a strong plus.

Key Responsibilities:
- Design, develop, and maintain robust and scalable automation frameworks.
- Build and enhance automation for frontend, backend, and API layers.
- Develop cloud-based automation solutions and optimize workflows on AWS cloud services.
- Collaborate with cross-functional Agile teams to define automation strategies and integrate automation into CI/CD pipelines.
- Write clear, maintainable, and efficient automation scripts using Python and Selenium.
- Use shell scripting to automate system-level tasks and support test environments.
- Contribute to the continuous improvement of automation practices, coding standards, and development processes.
- Conduct code reviews and mentor junior automation engineers.

Must-Have Skills:
- Strong hands-on experience with automation frameworks (custom or open-source).
- Expertise in Python programming.
- Proficiency in Selenium WebDriver for UI automation.
- Solid understanding and hands-on experience with backend automation and API testing (e.g., REST, Postman, Swagger).
- Strong shell scripting skills for task automation and system operations.
- Excellent understanding of OOP concepts and design patterns.
- Proficiency in AWS cloud services (e.g., EC2, Lambda, S3, CloudWatch) and automation on cloud infrastructure.
- Experience working in Agile/Scrum teams with familiarity with Agile testing practices.
- Good understanding of CI/CD tools and processes (e.g., Jenkins, GitLab).

Nice to Have:
- Exposure to AI/ML workflows and MLOps concepts.
- Experience with Robotic Process Automation (RPA) tools (e.g., UiPath, Automation Anywhere).
- Familiarity with Groovy scripting, especially in Jenkins pipelines.

Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
- Self-motivated with a passion for quality, learning, and innovation.

At YASH, you are empowered to create a career that will take you where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence, aided with technology, for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles:
- Flexible work arrangements, free spirit, and emotional positivity
- Agile self-determination, trust, transparency, and open collaboration
- All support needed for the realization of business goals
- Stable employment with a great atmosphere and an ethical corporate culture

Posted 5 days ago

Apply

Exploring GitLab Jobs in India

GitLab is a popular DevOps platform widely used by companies in India for version control, collaboration, and CI/CD automation. As organizations increasingly adopt DevOps practices, demand for GitLab professionals in India is rising, and job seekers with GitLab skills can explore a wide range of opportunities across industries throughout the country.

Top Hiring Locations in India

  1. Bangalore - Known as the Silicon Valley of India, Bangalore has a thriving tech industry with many companies actively hiring for GitLab roles.
  2. Pune - Another major IT hub in India, Pune offers plenty of opportunities for GitLab professionals.
  3. Hyderabad - With a growing tech scene, Hyderabad is a great place to look for GitLab jobs.
  4. Chennai - The capital city of Tamil Nadu is home to many IT companies that are in need of GitLab experts.
  5. Gurgaon - Located near the national capital, Gurgaon is a hub for IT and finance companies that frequently hire GitLab professionals.

Average Salary Range

The average salary range for GitLab professionals in India varies based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the GitLab job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, DevOps Engineer or DevOps Manager.

Related Skills

In addition to GitLab expertise, employers in India often look for professionals with skills in CI/CD pipelines, Docker, Kubernetes, Jenkins, AWS/Azure/GCP, and scripting languages like Python or Shell scripting.

Interview Questions

  • What is GitLab and how does it differ from other version control systems? (basic)
  • How would you set up a CI/CD pipeline using GitLab CI? (medium)
  • Can you explain the difference between a Git commit and a Git push? (basic)
  • What is a GitLab runner and how does it work? (medium)
  • How do you handle merge conflicts in GitLab? (medium)
  • Explain the purpose of a .gitignore file in a Git repository. (basic)
  • How would you integrate GitLab with Kubernetes for deploying applications? (advanced)
  • What are Git hooks and how can they be useful in a GitLab workflow? (medium)
  • Describe how GitLab handles branching and merging of code changes. (medium)
  • What security measures would you implement to secure a GitLab repository? (advanced)
  • How do you revert a commit in GitLab? (basic)
  • Explain the difference between GitLab CE and GitLab EE. (basic)
  • How would you troubleshoot a failing GitLab pipeline? (medium)
  • What is GitLab Pages and how can it be used for hosting websites? (medium)
  • Describe the purpose of GitLab artifacts and how they are used in CI/CD pipelines. (medium)
  • How do you manage permissions and access control in a GitLab repository? (medium)
  • What are Git submodules and when would you use them in a GitLab project? (medium)
  • Explain the advantages of using GitLab over other version control systems like SVN. (basic)
  • How do you handle large binary files in a GitLab repository? (medium)
  • What is GitLab's built-in issue tracking system and how does it integrate with the Git workflow? (medium)
  • Describe the process of forking a repository in GitLab. (basic)
  • How do you use GitLab's code review feature to collaborate with team members? (medium)
  • What is GitLab's Auto DevOps feature and how does it simplify the CI/CD process? (medium)
  • How would you monitor the performance of a GitLab CI/CD pipeline? (medium)
  • Can you explain the concept of GitLab's "Merge Requests" and how they facilitate code collaboration? (medium)
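Several of the questions above center on GitLab CI/CD. As a rough illustration of what an answer to the pipeline-setup question might build on, here is a minimal `.gitlab-ci.yml` sketch; the image, job names, and script commands are generic placeholders, not taken from any posting above:

```yaml
# .gitlab-ci.yml — minimal two-stage pipeline sketch (illustrative only).
image: python:3.11          # default Docker image for all jobs

stages:
  - build
  - test

build-job:
  stage: build
  script:
    - pip install -r requirements.txt   # install project dependencies

test-job:
  stage: test
  script:
    - pytest                # run the test suite; a non-zero exit fails the job
```

Committing a file like this to the repository root makes GitLab run the pipeline on each push, provided a runner is available: jobs in the same stage run in parallel, and a later stage starts only after the previous stage succeeds.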

Closing Remark

As the demand for GitLab professionals continues to grow in India, now is a great time for job seekers to enhance their skills and apply confidently for exciting opportunities in the field. Prepare well, showcase your expertise, and seize the GitLab job that aligns with your career goals. Good luck!
