
4039 GitLab Jobs - Page 28

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 years

2 - 5 Lacs

Ahmedabad

On-site


Unlock Your Potential With IGNEK

Welcome to IGNEK, where we combine innovation and passion! We want our workplace to help you grow professionally and appreciate the special things each person brings. Come with us as we use advanced technology to make a positive difference. At IGNEK, we know our success comes from our team's talent and hard work.

Celebrate Successes | Harness Your Skills | Experience Growth Together | Work, Learn, Celebrate | Appreciate Unique Contributions

Culture & Values
Our culture and values guide our actions and define our principles.
- Growth: Learn and grow with us. We're committed to providing opportunities for you to excel and expand your horizons.
- Transparency: We are transparent in our work, culture, and communication to build trust and strong bonds among employees, teams, and managers.
- People First: Our success is all about our people. We care about your well-being and value diversity in our inclusive workplace.
- Be a Team: Teamwork is our strength. Embrace a "Be a Team" mindset, valuing collective success over individual triumphs. Together, we can overcome challenges and reach new heights.

Perks & Benefits
Competitive flexibility and comprehensive benefits prioritize your well-being. Creative programs, professional development, and a vibrant work-life balance ensure your success is our success.
5 Days Working | Festival Celebrations | Rewards & Benefits | Certification Program | Skills Improvement | Referral Program | Friendly Work Culture | Training & Development | Enterprise Projects | Leave Carry Forward | Yearly Trip | Hybrid Work | Fun Activities (Indoor/Outdoor) | Flexible Timing | Reliable Growth | Team Lunch | Stay Happy | Opportunity | Work-Life Balance

What Makes You Different?
- BE Authentic: Stay true to yourself; it's what sets you apart.
- BE Proactive: Take charge of your work; don't wait for things to happen.
- BE A Learner: Keep an open mind and never stop seeking knowledge.
- BE Professional: Approach every task with diligence and integrity.
- BE Innovative: Think outside the box and push boundaries.
- BE Passionate: Let your enthusiasm light the path to success.

Python Backend Developer
Technology: Python | Job Type: Full Time | Location: Ahmedabad (On-site) | Experience: 7+ Years

Overview: We are seeking a highly skilled Python Backend Developer with a strong foundation in building scalable, cloud-native microservices and business-logic-driven systems. You will play a key role in delivering backend solutions on AWS, leveraging modern development tools and practices to build robust, enterprise-grade services.

Key Responsibilities:
- Design, develop, and maintain scalable RESTful APIs and microservices using Python.
- Lead the end-to-end implementation of backend systems on AWS Cloud.
- Modernize and migrate legacy systems to cloud-native architectures.
- Integrate with relational databases (Postgres RDS, Oracle) and graph databases.
- Collaborate with tech leads and stakeholders to translate requirements into scalable backend solutions.
- Conduct unit, integration, and functional testing to ensure high reliability and performance.
- Follow SDLC best practices and deploy code using CI/CD automation pipelines.
- Use orchestration tools like Apache Airflow to streamline backend workflows (see the sketch after this listing).
- Ensure solutions meet security, performance, and scalability standards.
- Stay current with emerging technologies and best practices in backend and cloud development.

Required Skills & Experience:
- 7+ years of backend development experience with a strong focus on Python.
- Proficient in Python for service development and data integration tasks; additional experience with Java, PL/SQL, and Bash scripting is a plus.
- Strong hands-on experience with AWS services: EC2, Lambda, RDS, S3, IAM, API Gateway, Kinesis.
- Expertise in PostgreSQL or Oracle, including stored procedures and performance tuning.
- Solid understanding of microservices architecture and serverless computing.
- Familiarity with Elasticsearch or other search platforms.
- Practical experience with CI/CD tools: GitLab, GitHub, AWS CodePipeline.
- Experience with Terraform, Docker, and cloud infrastructure setup and management.

Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Solutions Architect).
- Experience working in Agile/Scrum environments.
- Exposure to ETL pipeline development and data-driven backend systems.
- Understanding of Kubernetes, networking, and cloud security principles.
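Purely as an illustration of the Airflow orchestration work this listing mentions (not IGNEK's actual code; the task names and schedule are hypothetical), a minimal daily DAG chaining an extract step into a load step might look like this:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Hypothetical extract step: pull records from a source system.
    print("extracting records")

def load():
    # Hypothetical load step: write transformed records to Postgres RDS.
    print("loading records")

# Airflow 2.4+ uses `schedule`; older 2.x versions use `schedule_interval`.
with DAG(
    dag_id="backend_workflow_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```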

Posted 5 days ago

Apply

5.0 - 7.0 years

0 Lacs

Ahmedabad

On-site


5 - 7 Years | 1 Opening | Ahmedabad

Role Description

Must-Have Skills:
- Java (Spring Boot)
- AWS (including Systems Manager Parameter Store)
- Apache Kafka
- SQL databases
- NoSQL databases (e.g., ScyllaDB, Cassandra, DynamoDB)

Good-to-Have Skills:
- Experience with JIRA
- Familiarity with OpenSearch or Elasticsearch
- Exposure to GitLab CI/CD pipelines and automated deployments

Experience Required:
- Proven experience in building production-grade RESTful APIs
- Strong understanding of: HTTP protocols, status codes, and JSON; request/response lifecycles; OOP, design patterns, and REST best practices; error-handling strategies
- Solid grasp of unit testing, integration testing, and mocking frameworks (e.g., JUnit, Mockito)

Key Responsibilities:
- Design, develop, and maintain scalable microservices and APIs
- Work with parameterized configurations using AWS SSM (see the sketch after this listing)
- Collaborate across backend, frontend, and DevOps teams
- Participate in code reviews, CI/CD workflows, and automated deployments
- Ensure high code quality through testing and documentation

Mandatory Soft Skills:
- Excellent written and verbal communication
- Strong sense of ownership and ability to work independently
- Proactive in identifying blockers and suggesting solutions
- Comfortable in a fast-paced, asynchronous work environment
- Effective collaboration across cross-functional teams

Skills: RESTful APIs, HTTP, JSON

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact, touching billions of lives in the process.
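This role is Java/Spring Boot-centric, but the Parameter Store pattern it names is language-agnostic. As a hedged illustration only (the parameter name is hypothetical), here is how a configuration value can be read from AWS SSM using Python's boto3:

```python
import boto3

# Create an SSM client; credentials and region come from the environment.
ssm = boto3.client("ssm")

# Fetch a hypothetical parameter; WithDecryption handles SecureString values.
response = ssm.get_parameter(
    Name="/myapp/prod/db-connection-string",  # placeholder name
    WithDecryption=True,
)
db_url = response["Parameter"]["Value"]
print("loaded configuration value of length", len(db_url))
```

In Spring Boot, the equivalent is typically wired in through the Spring Cloud AWS Parameter Store integration rather than fetched by hand.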

Posted 5 days ago

Apply

0 years

4 - 8 Lacs

Noida

On-site


Job Description

Job ID: LEADS012900 | Employment Type: Regular | Work Style: Hybrid | Location: Noida, UP, India | Travel: Up to 25% | Role: Lead Software Engineer

Company Overview
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose — a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose — people — then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

About the role:
UKG is looking for a highly skilled Lead Software Engineer who will take a leadership role in our engineering team. As our primary DevOps expert, you will be responsible for designing and implementing scalable, reliable, and secure infrastructure solutions, as well as leading and mentoring other members of the DevOps team. Your extensive knowledge of Terraform and Ansible, coupled with your expertise in cloud technologies, will be instrumental in shaping our infrastructure and deployment strategies.

Duties and Responsibilities:
- Lead the design, implementation, and management of our cloud infrastructure using Terraform, ensuring best practices for scalability, resiliency, and security
- Develop and maintain highly efficient Ansible playbooks and establish configuration management best practices
- Provide technical leadership and mentorship to the DevOps team, fostering a culture of continuous learning and improvement
- Collaborate with cross-functional teams to ensure the smooth integration and deployment of applications
- Optimize infrastructure performance, monitor ongoing operations, and implement proactive solutions for issues and bottlenecks
- Establish and maintain CI/CD pipelines, designing and implementing automated testing and release processes
- Evaluate and recommend new tools and technologies to enhance the DevOps workflow and improve efficiency
- Act as a subject matter expert for DevOps practices, staying up to date with the latest industry trends and best practices

About you:
Basic Qualifications:
- Proven experience as a Lead DevOps Engineer, leading the design and implementation of complex and scalable infrastructures
- Extensive expertise in Terraform and Ansible, with a deep understanding of their capabilities and best practices
- Strong knowledge of cloud platforms (AWS, Azure, or GCP) and proficiency in designing and managing cloud resources
- Excellent scripting and automation skills using languages like Python, Bash, or PowerShell (see the sketch after this listing)
- Proven ability to lead and mentor a team, fostering a collaborative and high-performance culture
- Ability to troubleshoot and resolve complex infrastructure issues in production environments

Preferred Qualifications:
- In-depth knowledge of containerization technologies such as Docker and orchestration tools like Kubernetes
- Solid understanding of networking concepts, security principles, and infrastructure hardening practices
- Experience with CI/CD tools like GitHub Actions, Jenkins, GitLab CI/CD, or CircleCI
- Certifications in Terraform and/or Google Cloud Platform (GCP) preferred
- Exceptional problem-solving and communication skills, with the ability to effectively collaborate with cross-functional teams
- Bachelor's or master's degree in computer science, engineering, or a related field

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market-share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio designed to support customers of all sizes, industries, and geographies that will propel us into an even brighter tomorrow!

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.

Disability Accommodation in the Application and Interview Process
For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com
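Purely as an illustration of the scripting-plus-IaC combination these qualifications describe (the working directory is hypothetical), a small Python wrapper can drive a standard Terraform init/plan/apply workflow:

```python
import subprocess
import sys

def run(cmd: list[str]) -> None:
    """Run a command, echoing it, and abort on failure."""
    print("+", " ".join(cmd))
    result = subprocess.run(cmd, cwd="infra/")  # hypothetical Terraform directory
    if result.returncode != 0:
        sys.exit(result.returncode)

# Standard Terraform workflow: init, save a plan, then apply exactly that plan.
run(["terraform", "init", "-input=false"])
run(["terraform", "plan", "-out=tfplan", "-input=false"])
run(["terraform", "apply", "-input=false", "tfplan"])
```

Applying a saved plan file rather than re-planning at apply time is a common guard against drift between review and execution.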

Posted 5 days ago

Apply

0 years

6 - 10 Lacs

Noida

On-site


Noida, Uttar Pradesh, India (+4 more) | Job ID: 768552

Join our Team

About this opportunity:
At Ericsson, we're passionate about building software that solves problems. We count on our DevOps Engineers to drive excellence and reliability into our products and services. As a leader in DevOps, you will lead a team focused on delivering automation and DevOps solutions and implementations in customer networks for the 4G and 5G product portfolio across different business cases.

What you will do:
- Gather and analyze metrics from both operating systems and applications to assist in performance tuning and fault finding
- Partner with development teams to improve services through rigorous testing and release procedures
- Participate in system design consulting and platform management
- Create sustainable systems and services through automation and uplifts
- Balance feature development speed and reliability with well-defined service level objectives
- Roll out automation in customer environments
- Troubleshoot effectively during live node activities

Required Qualifications:
- Git (must have); GitLab (preferred); Azure DevOps; GitHub
- Ansible (must have); Puppet/Chef, Cloudflare (Access), SonarQube (SAST)
- Docker (preferred); Artifactory
- OSs and enterprise environments: Ubuntu 20.04/22.04, RHEL 7/8 (must have)
- Kubernetes
- Python (must have), Bash scripting (must have), JSON/YAML
- Project management and methodologies: Scrum, Agile, and DevOps ways of working (preferred)
- Experience with public cloud platforms, i.e., Azure, AWS, GCP
- Expertise in DevOps pipeline orchestration and automation
- Experience with common software development tools, such as source code management, configuration management, security and compliance, registries and artifact repositories, OSs and enterprise environments, desired programming languages, and monitoring tools

You will bring:
- Work with development teams to design scalable, robust systems using cloud-native architectural principles
- Build automation using industry-standard tools to deploy hundreds of different services
- Outstanding collaboration, communication, and innovation skills
- Develop, manage, and maintain automation assets of the Service Delivery Global DevOps Platform
- Develop automation to manage platform infrastructure, applications, and services
- Continuously improve platform reliability, quality, uptime, and customer satisfaction
- Provide primary operational support, lifecycle management, and engineering for platform systems and services

Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-seen-before solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.

What happens once you apply?

Posted 5 days ago

Apply

4.0 years

6 - 9 Lacs

Noida

On-site


NeoXam is a leading financial software company delivering cutting-edge solutions for data management, portfolio management, and regulatory compliance. With a strong global presence, NeoXam serves over 150 customers in 25 countries, processing more than €25 trillion worth of assets daily and supporting over 10,000 users. Committed to client success, NeoXam provides reliable and scalable solutions that help buy- and sell-side players navigate the evolving financial landscape. Backed by 800+ employees, NeoXam is headquartered in Paris with 20 offices worldwide.

About the Role:
We are seeking a highly motivated DevOps Engineer to join our team and play a pivotal role in building and maintaining our cloud infrastructure. The ideal candidate will have a strong understanding of DevOps principles and practices, with a focus on AWS, Kubernetes, CI/CD pipelines, Docker, and Terraform.

Responsibilities:
- DevOps & Java Backend Engineering: Hands-on development of backend APIs using Java (Spring Boot) and Django, microservices, and deployment of secure, production-grade systems
- Cloud Platforms: Design, build, and maintain our cloud infrastructure, primarily on AWS
- Infrastructure as Code (IaC): Develop and manage IaC solutions using tools like Terraform to provision and configure cloud resources on AWS
- Containerization: Implement and manage Docker containers and Kubernetes clusters for efficient application deployment and scaling
- CI/CD Pipelines: Develop and maintain automated CI/CD pipelines using tools like Jenkins, Bitbucket CI/CD, or ArgoCD to streamline software delivery
- Automation: Automate infrastructure provisioning, configuration management, and application deployment using tools like Terraform and Ansible
- Monitoring and Troubleshooting: Implement robust monitoring and alerting systems to proactively identify and resolve issues (see the sketch after this listing)
- Collaboration: Work closely with development teams to understand their needs and provide solutions that align with business objectives
- Security: Ensure compliance with security best practices and implement measures to protect our infrastructure and applications

Technical Skills:
- Programming Languages: Java, Python, JavaScript, Shell Scripting
- Frameworks & Tools: Django, Spring Boot, REST APIs, Git, Jenkins
- Cloud & DevOps: Oracle Cloud Infrastructure (OCI), AWS, Terraform, Kubernetes, Docker, Ansible
- Databases: MySQL, PostgreSQL, Oracle DB, NoSQL (MongoDB)
- Monitoring & Logging: Grafana, Prometheus, EFK Stack
- Version Control & CI/CD: Git, GitHub, GitLab, Bitbucket, Jenkins, CI/CD pipelines

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field
- 4+ years of experience in DevOps or a similar role
- Strong proficiency in AWS services (EC2, S3, VPC, IAM, etc.)
- Experience with Kubernetes and container orchestration
- Expertise in CI/CD pipelines and tools (Jenkins, Bitbucket CI/CD, ArgoCD)
- Familiarity with Docker and containerization concepts
- Experience with infrastructure-as-code tools (Terraform, CloudFormation)
- Scripting skills (Python, Bash)
- Understanding of networking and security concepts

Bonus Points:
- Experience with serverless computing platforms (AWS Lambda, AWS Fargate)
- Knowledge of infrastructure as code (IaC) principles
- Experience maintaining SaaS projects
- Certifications in AWS, Kubernetes, or DevOps

Why Join Us:
- Opportunity to work on cutting-edge technologies and projects
- Collaborative and supportive team environment
- Competitive compensation and benefits package
- Opportunities for professional growth and development

If you are a passionate DevOps engineer looking to make a significant impact, we encourage you to apply.
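As a hedged sketch of the monitoring-and-alerting work described above (the metric names are invented; this is not NeoXam's stack beyond the Prometheus/Grafana tools the listing names), a minimal Python exporter using prometheus_client looks like this:

```python
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical application metrics, scraped by Prometheus and graphed in Grafana.
REQUESTS_TOTAL = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work queue depth")

if __name__ == "__main__":
    start_http_server(8000)  # expose metrics at http://localhost:8000/metrics
    while True:
        REQUESTS_TOTAL.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real measurement
        time.sleep(1)
```

Alert rules would then be defined on these series in Prometheus, with Grafana dashboards for visualization.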

Posted 5 days ago

Apply

5.0 years

0 Lacs

Jaipur

On-site


ABOUT HAKKODA
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone's input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience developing production Python-based applications (such as Django, Flask, or FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.

Key Responsibilities:
- Lead application testing and productionalization of applications built on top of Snowflake, including implementation and execution of unit and integration testing; automated test suites use Pytest and Streamlit App Tests to ensure code quality, data accuracy, and system reliability (see the test sketch after this listing)
- Develop and integrate CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments
- Develop and test AWS-based pipelines: AWS Glue, Airflow (MWAA), S3
- Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python
- Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models
- Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions
- Monitor data pipelines and application performance, optimizing for speed, cost, and user experience
- Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams
- Work CST hours and collaborate with onshore and offshore teams

Qualifications, Skills & Experience:
- 5+ years of experience in data engineering or Python-based application development on AWS (Flask, Django, FastAPI, Streamlit); experience building data-intensive applications in Python as well as data pipelines on AWS is a must
- Bachelor's degree in computer science, information systems, data engineering, or a related field (or equivalent experience)
- Proficient in SQL and Python for data manipulation and automation tasks
- Experience developing and productionalizing applications built on Python-based frameworks such as FastAPI, Django, or Flask
- Experience with application frameworks such as Streamlit, Angular, or React for rapid data app deployment
- Solid understanding of software testing principles and experience using Pytest or similar Python frameworks
- Experience configuring and maintaining CI/CD pipelines for automated testing and deployment
- Familiarity with version control systems such as GitLab
- Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake

Preferred Qualifications:
- Experience with dbt (data build tool) for transformation modeling
- Knowledge of Snowflake's advanced features (e.g., masking policies, external functions, Snowpark)
- Exposure to cloud platforms (e.g., AWS, Azure, GCP)
- Strong communication and documentation skills

Benefits:
Health insurance, paid leave, technical training and certifications, robust learning and development opportunities, incentives, Toastmasters, food program, fitness program, and a referral bonus program.

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive.

Ready to take your career to the next level? Apply today and join a team that's shaping the future!

Hakkoda is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.
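As a hedged sketch of the Pytest-style testing this role emphasizes (the function under test is hypothetical, not Hakkoda code), a unit test for a small transformation might look like:

```python
import pytest

def normalize_scores(scores: list[float]) -> list[float]:
    """Hypothetical transformation: scale scores into the 0-1 range."""
    if not scores:
        raise ValueError("scores must be non-empty")
    low, high = min(scores), max(scores)
    if high == low:
        return [0.0 for _ in scores]
    return [(s - low) / (high - low) for s in scores]

def test_normalize_scores_range():
    assert normalize_scores([10.0, 20.0, 30.0]) == [0.0, 0.5, 1.0]

def test_normalize_scores_rejects_empty():
    with pytest.raises(ValueError):
        normalize_scores([])
```

Streamlit's AppTest harness extends the same idea to whole apps, driving widgets programmatically and asserting on rendered state.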

Posted 5 days ago

Apply

10.0 years

3 - 7 Lacs

Jaipur

On-site


Location: Gurugram, Jaipur | Employment Type: Full time | Location Type: Hybrid | Department: Engineering

For over four decades, PAR Technology Corporation (NYSE: PAR) has been a leader in restaurant technology, empowering brands worldwide to create lasting connections with their guests. Our innovative solutions and commitment to excellence provide comprehensive software and hardware that enable seamless experiences and drive growth for over 100,000 restaurants in more than 110 countries. Embracing our "Better Together" ethos, we offer Unified Customer Experience solutions, combining point-of-sale, digital ordering, loyalty, and back-office software solutions as well as industry-leading hardware and drive-thru offerings. To learn more, visit partech.com or connect with us on LinkedIn, X (formerly Twitter), Facebook, and Instagram.

Position Description:
We are seeking a highly skilled and experienced Quality Engineering (QE) Manager to lead our QE team. In this role, you will be responsible for defining and driving the quality strategy, managing the quality engineering team, and ensuring the delivery of high-quality products. You will work closely with Product, Development, DevOps, and other cross-functional teams to implement best practices in test automation, performance testing, and continuous quality improvement.

Position Location: Gurugram / Jaipur

What you will be doing and owning:
- Lead and mentor a team of quality engineers (manual and automation)
- Define and drive the overall test strategy and quality metrics for projects
- Ensure the integration of quality engineering into the entire software development lifecycle (SDLC)
- Collaborate with Product and Development teams to define test requirements, acceptance criteria, and automation coverage
- Oversee the creation and maintenance of test automation frameworks (UI, API, performance)
- Implement and monitor continuous integration (CI) and continuous deployment (CD) practices for test automation
- Identify and mitigate quality risks early in the development process
- Track and report on key quality metrics, such as defect leakage, test coverage, and release quality
- Drive root cause analysis and continuous improvement initiatives
- Stay up to date with industry trends and emerging technologies in quality engineering

What we're looking for:
- 10+ years of experience in software quality engineering, with at least 3 years in a leadership or managerial role
- Proven experience in designing and implementing automation frameworks (Selenium, Cypress, Appium, REST Assured, etc.)
- Strong understanding of Agile methodologies and DevOps practices
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps)
- Familiarity with performance testing tools (e.g., JMeter, LoadRunner) is a plus
- Excellent communication, leadership, and stakeholder management skills

Interview Process:
1. Phone screen with the Talent Acquisition team
2. Video interview with the technical teams (via MS Teams or face-to-face)
3. Video interview with the hiring manager (via MS Teams or face-to-face)

PAR is proud to provide equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. We also provide reasonable accommodations to individuals with disabilities in accordance with applicable laws.
If you require reasonable accommodation to complete a job application, pre-employment testing, a job interview or to otherwise participate in the hiring process, or for your role at PAR, please contact accommodations@partech.com. If you’d like more information about your EEO rights as an applicant, please visit the US Department of Labor's website.

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote


GitLab is an open core software company that develops the most comprehensive AI-powered DevSecOps Platform, used by more than 100,000 organizations. Our mission is to enable everyone to contribute to and co-create the software that powers our world. When everyone can contribute, consumers become contributors, significantly accelerating the rate of human progress. This mission is integral to our culture, influencing how we hire, build products, and lead our industry. We make this possible at GitLab by running our operations on our product and staying aligned with our values. Learn more about Life at GitLab.

Thanks to products like Duo Enterprise and Duo Workflow, customers get the benefit of AI at every stage of the SDLC. The same principles built into our products are reflected in how our team works: we embrace AI as a core productivity multiplier. All team members are encouraged and expected to incorporate AI into their daily workflows to drive efficiency, innovation, and impact across our global organisation.

The Engineering Manager for Composition Analysis and Dynamic Analysis specializes in leading teams focused on application security scanning technologies. This role oversees multiple security-focused engineering groups and is responsible for balancing priorities across these specialized teams. This role is an extension of the Engineering Manager position.

Groups Overview
- Composition Analysis: responsible for Software Composition Analysis and Container Scanning
- Dynamic Analysis: responsible for API Security, Dynamic Application Security Testing (DAST), and Fuzz Testing

What You'll Do
- Manage engineers across both the Composition Analysis and Dynamic Analysis groups
- Drive key initiatives, including: auto-remediation of vulnerable software packages; scanning of unmanaged dependencies in C/C++; static reachability analysis with function-level granularity; snippet detection for open source dependencies; and improving the DAST crawler for efficiency, stability, and consistent web application traversal
- Balance priorities across multiple security-focused engineering teams
- Author project plans for epics across both groups, ensuring alignment and avoiding duplication of effort
- Run agile project management processes for multiple teams
- Provide guidance on security product architecture
- Coordinate between the Composition Analysis and Dynamic Analysis teams to ensure consistent and complementary approaches to application security

What You'll Bring
- In-depth understanding of application security concepts, particularly software composition analysis techniques for evaluating the security risks of application dependencies, and dynamic application security testing (DAST) tools
- Understanding of the challenges in developing and maintaining security scanning tools
- Experience managing multiple technical teams simultaneously
- Familiarity with containerization technologies and dependency management systems
- Knowledge of web application security testing techniques and tools
- Experience with open source security tooling (such as OWASP ZAP, Trivy, or similar)
- Experience in DevSecOps practices and implementation
- Experience in vulnerability management and remediation

How GitLab Will Support You
- Benefits to support your health, finances, and well-being
- All-remote, asynchronous work environment
- Flexible Paid Time Off
- Team Member Resource Groups
- Equity compensation and an Employee Stock Purchase Plan
- Growth and Development Fund
- Parental leave
- Home office support

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

The base salary range for this role's listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, and alignment with market data. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.

California/Colorado/Hawaii/New Jersey/New York/Washington/DC/Illinois/Minnesota pay range: $131,600 - $282,000 USD

Country Hiring Guidelines: GitLab hires new team members in countries around the world. All of our roles are remote, however some roles may carry specific location-based eligibility requirements. Our Talent Acquisition team can help answer any questions about location after starting the recruiting process.

Privacy Policy: Please review our Recruitment Privacy Policy. Your privacy is important to us.

GitLab is proud to be an equal opportunity workplace and is an affirmative action employer. GitLab's policies and practices relating to recruitment, employment, career development and advancement, promotion, and retirement are based solely on merit, regardless of race, color, religion, ancestry, sex (including pregnancy, lactation, sexual orientation, gender identity, or gender expression), national origin, age, citizenship, marital status, mental or physical disability, genetic information (including family medical history), discharge status from the military, protected veteran status (which includes disabled veterans, recently separated veterans, active duty wartime or campaign badge veterans, and Armed Forces service medal veterans), or any other basis protected by law. GitLab will not tolerate discrimination or harassment based on any of these characteristics. See also GitLab's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know during the recruiting process.

Posted 5 days ago

Apply

8.0 - 10.0 years

0 Lacs

Andhra Pradesh, India

On-site


Qualifications and Education
- 8-10 years of experience building backend services using Java
- UI experience desired, with AngularJS and/or React frameworks
- Good knowledge of cloud platforms, particularly Google Cloud
- Bachelor's degree in computer science or an engineering discipline
- Strong coding and debugging skills, with experience in backend technologies
- Strong problem-solving skills and attention to detail
- Exposure to building microservices application architectures, with hands-on experience in design and development/coding
- Knowledge of development tools such as IntelliJ, Visual Studio Code, GitLab, and BitBucket
- Strong interpersonal communication and collaboration skills; must be able to work independently and possess strong communication skills
- A good team player with a high level of personal commitment and a 'can do' attitude
- Demonstrable ownership, commitment, and willingness to learn new technologies and frameworks

Posted 5 days ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

Remote


HackerOne is a global leader in offensive security solutions. Our HackerOne Platform combines AI with the ingenuity of the largest community of security researchers to find and fix security, privacy, and AI vulnerabilities across the software development lifecycle. The platform offers bug bounty, vulnerability disclosure, pentesting, AI red teaming, and code security. We are trusted by industry leaders like Amazon, Anthropic, Crypto.com, General Motors, GitHub, Goldman Sachs, Uber, and the U.S. Department of Defense. HackerOne was named a Best Workplace for Innovators by Fast Company in 2023 and a Most Loved Workplace for Young Professionals in 2024.

HackerOne Values
HackerOne is dedicated to fostering a strong and inclusive culture. HackerOne is Customer Obsessed and prioritizes customer outcomes in our decisions and actions. We Default to Disclosure by operating with transparency and integrity, ensuring trust and accountability. Employees, researchers, customers, and partners Win Together by fostering empowerment, inclusion, respect, and accountability.

Senior Analytics Engineer, DataOne
Location: Pune, India
*** This role requires the candidate to be based in Pune and work from an office 4 or 5 days a week. Please only apply if you're okay with these requirements. ***

Position Summary
HackerOne is seeking a Senior Analytics Engineer to join our DataOne team. You will lead the discovery, architecture, and development of high-impact, high-performance, scalable source-of-truth data marts and data products. Joining our growing, distributed organization, you'll be instrumental in building the foundation that powers HackerOne's one source of truth. As a Senior Analytics Engineer, you'll be able to lead challenging projects and foster collaboration across the company. Leveraging your extensive technological expertise, domain knowledge, and dedication to business objectives, you'll drive innovation to propel HackerOne forward.

DataOne democratizes source-of-truth information and insights to enable all Hackeronies to ask the right questions, tell cohesive stories, and make rigorous decisions so that HackerOne can delight our customers and empower the world to build a safer internet. The future is one where every Hackeronie is a catalyst for positive change, driving data-informed innovation while fostering our culture of transparency, collaboration, integrity, excellence, and respect for all.

What You Will Do
- Your first 30 days will focus on getting to know HackerOne. You will join your new squad and begin onboarding: learn our technology stack (Python, Airflow, Snowflake, DBT, Meltano, Fivetran, Looker, AWS) and meet our Hackeronies.
- Within 60 days, you will deliver impact at a company level, with consistent contributions to high-impact, high-performance, scalable source-of-truth data marts and data products.
- Within 90 days, you will drive the continuous evolution and innovation of data at HackerOne, identifying and leading new initiatives, and fostering cross-departmental collaboration to enhance these efforts.
- Deliver impact by developing the roadmap for continuously and iteratively launching high-impact, high-performance, scalable source-of-truth data marts and data products, and by leading and delivering cross-functional product and technical initiatives.
- Be a technical paragon and cross-functional force multiplier: autonomously determine where to apply focus, contribute at all levels, elevate your squad, and design solutions to ambiguous business challenges in a fast-paced, early-stage environment.
- Drive continuous evolution and innovation, the adoption of emerging technologies, and the implementation of industry best practices.
- Champion a higher bar for discoverability, usability, reliability, timeliness, consistency, validity, uniqueness, simplicity, completeness, integrity, security, and compliance of information and insights across the company.
- Provide technical leadership and mentorship, fostering a culture of continuous learning and growth.

Minimum Qualifications
- 6+ years of experience as an Analytics Engineer, Business Intelligence Engineer, Data Engineer, or similar role, with a proven track record of launching source-of-truth data marts.
- 6+ years of experience building and optimizing data pipelines, products, and solutions.
- Must be flexible to align with occasional evening meetings in US time zones.
- Extensive experience working with various data technologies and tools such as Airflow, Snowflake, Meltano, Fivetran, DBT, and AWS.
- Expert in SQL for data manipulation in a fast-paced work environment.
- Expert in creating compelling data stories using data visualization tools such as Looker, Tableau, Sigma, Domo, or PowerBI.
- Proven track record of having substantial impact across the company, as well as externally for the company, demonstrating your ability to drive positive change and achieve significant results.
- English fluency, excellent communication skills, and the ability to present data-driven narratives in verbal, presentation, and written formats.
- Passion for working backwards from the customer and empathy for business stakeholders.
- Experience shaping the strategic vision for data.
- Experience working with Agile and iterative development processes.

Preferred Qualifications
- Strong proficiency in at least one data programming language such as Python or R.
- Experience working within and with data from business applications such as Salesforce, Clari, Gainsight, Workday, GitLab, Slack, or Freshservice.
- Proven track record of driving innovation, adopting emerging technologies, and implementing industry best practices.
- Thrive on solving ambiguous problem statements in an early-stage environment.
- Experience designing advanced data visualizations and data-rich interfaces in Figma or equivalent.

Compensation Bands: Pune, India: ₹3.7M – ₹4.6M, plus equity

Job Benefits:
- Health (medical, vision, dental), life, and disability insurance*
- Equity stock options
- Retirement plans
- Paid public holidays and unlimited PTO
- Paid maternity and parental leave
- Leaves of absence (including caregiver leave and leave under CO's Healthy Families and Workplaces Act)
- Employee Assistance Program
- Flexible Work Stipend
*Eligibility may differ by country

We're committed to building a global team! For certain roles outside the United States, U.K., and the Netherlands, we partner with Remote.com as our Employer of Record (EOR). Visa/work permit sponsorship is not available.

Employment at HackerOne is contingent on a background check.
HackerOne is an Equal Opportunity Employer in the terms and conditions of employment for all employees and job applicants without regard to race, color, religion, sex, sexual orientation, age, gender identity or gender expression, national origin, pregnancy, disability or veteran status, or any other protected characteristic as outlined by international, federal, state, or local laws. This policy applies to all HackerOne employment practices, including hiring, recruiting, promotion, termination, layoff, recall, leave of absence, compensation, benefits, training, and apprenticeship. HackerOne makes hiring decisions based solely on qualifications, merit, and business needs at the time.

For US based roles only: Pursuant to the San Francisco Fair Chance Ordinance, all qualified applicants with arrest and conviction records will be considered for the position.

Posted 5 days ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Description

Responsibilities:
- Design, develop, implement, test, and maintain automated test suites and frameworks for AI/ML pipelines
- Collaborate closely with ML engineers and data scientists to understand model architectures and data workflows
- Develop and execute test plans, test cases, and test scripts to identify software defects in AI/ML applications
- Ensure end-to-end quality of AI/ML solutions, including data integrity, model performance, and system integration
- Implement continuous integration and continuous deployment (CI/CD) processes for ML pipelines
- Conduct performance and scalability testing for AI/ML systems
- Document and track software defects using bug-tracking systems, and report issues to development teams
- Participate in code reviews and provide feedback on testability and quality
- Help foster a culture of quality and continuous improvement within the ML engineering group
- Stay updated with the latest trends and best practices in AI/ML testing and quality assurance

Must-Haves:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 2+ years of experience in quality assurance, specifically testing AI/ML applications
- Strong programming skills in Python (experience with libraries like PyTest or unittest; see the test sketch after this listing)
- Familiarity with machine learning frameworks (TensorFlow, PyTorch, or scikit-learn)
- Experience with test automation tools and frameworks
- Knowledge of CI/CD tools (Jenkins, GitLab CI, or similar)
- Experience with containerization technologies like Docker and orchestration systems like Kubernetes
- Proficiency with Linux operating systems
- Familiarity with version control systems like Git
- Strong understanding of software testing methodologies and best practices
- Excellent analytical and problem-solving skills
- Excellent communication and collaboration skills

Bonus Attributes:
- Experience testing data pipelines and ETL processes
- Cloud platform experience; GCP, AWS, or Azure are acceptable
- Knowledge of big data technologies like Apache Spark, Kafka, or Airflow
- Experience with performance testing tools
- Understanding of data science concepts and statistical analysis
- Certifications in software testing or cloud technologies

Abilities:
- Work with a high level of initiative, accuracy, and attention to detail
- Prioritize multiple assignments effectively and meet established deadlines
- Interact successfully, efficiently, and professionally with staff and customers
- Excellent organization skills
- Critical thinking ranging from moderately to highly complex
- Flexibility in meeting the business needs of the customer and the company
- Work creatively and independently with latitude and minimal supervision
- Use experience and judgment to accomplish assigned goals
- Experience navigating organizational structure
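As a hedged illustration of the kind of automated check this role describes (the model, dataset, and threshold are invented for the example, not the employer's actual suite), a PyTest case can assert basic model-quality invariants on a scikit-learn classifier:

```python
import pytest
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

@pytest.fixture
def trained_model_and_data():
    # Synthetic stand-in for a real training dataset.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model, X_test, y_test

def test_predictions_have_expected_shape(trained_model_and_data):
    model, X_test, _ = trained_model_and_data
    preds = model.predict(X_test)
    assert preds.shape == (X_test.shape[0],)

def test_accuracy_meets_floor(trained_model_and_data):
    # 0.8 is an arbitrary illustrative threshold, not a real quality bar.
    model, X_test, y_test = trained_model_and_data
    assert model.score(X_test, y_test) >= 0.8
```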

Posted 5 days ago

Apply

7.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office


We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 7 to 8+ years of experience in full-stack development, with a strong focus on DevOps

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD
- Automate build, test, and deployment processes for Java applications
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing
- Manage access with IAM roles and policies; use AWS Secrets Manager / Parameter Store for managing credentials
- Enforce security best practices, encryption, and audits
- Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules (see the sketch after this listing); implement disaster recovery (DR) strategies
- Work closely with development teams to integrate DevOps practices
- Document pipelines, architecture, and troubleshooting runbooks
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans

Must-Have Skills:
- Experience working on Linux-based infrastructure
- Excellent understanding of Ruby, Python, Perl, and Java
- Configuring and managing databases such as MySQL and MongoDB
- Excellent troubleshooting skills
- Selecting and deploying appropriate CI/CD tools
- Working knowledge of various tools, open-source technologies, and cloud services
- Awareness of critical concepts in DevOps and Agile principles
- Managing stakeholders and external interfaces
- Setting up tools and required infrastructure
- Defining and setting development, testing, release, update, and support processes for DevOps operation
- Technical skills to review, verify, and validate software code developed in the project

Interview Mode: Face-to-face for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2-4 PM
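As a hedged sketch of the S3 lifecycle automation mentioned above (the bucket name, prefix, and retention periods are placeholder values), a boto3 script can attach an archive-then-expire policy to a backup bucket:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical policy: move backups to Glacier after 30 days, delete after 365.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-db-backups",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-then-expire-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
print("lifecycle policy applied")
```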

Posted 5 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


- 5+ years of experience in software engineering and 3+ years of experience with cloud-native architectures
- 2+ years implementing secure and compliant solutions for highly regulated environments
- 3+ years of experience with a container orchestration platform like Kubernetes, EKS, ECS, AKS, or equivalent
- 2+ years of production system administration and infrastructure operations experience
- Excellence in container architecture, design, ecosystem, and/or development
- Experience with container-based CI/CD tools such as ArgoCD, Helm, CodeFresh, GitHub Actions, GitLab, or equivalent

Posted 5 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
There's nothing more exciting than being at the center of a rapidly growing field in technology and applying your skillsets to drive innovation and modernize the world's most complex and mission-critical systems.

As a Site Reliability Engineer III at JPMorgan Chase within the Infrastructure Platform team, you will solve complex and broad business problems with simple and straightforward solutions. Through code and cloud infrastructure, you will configure, maintain, monitor, and optimize applications and their associated infrastructure to independently decompose and iteratively improve on existing solutions. You are a significant contributor to your team by sharing your knowledge of end-to-end operations, availability, reliability, and scalability of your application or platform.

Job Responsibilities
- Guide and assist others in building appropriate-level designs and gaining consensus from peers where appropriate
- Collaborate with other software engineers and teams to design and implement deployment approaches using automated continuous integration and continuous delivery pipelines
- Collaborate with other software engineers and teams to design, develop, test, and implement availability, reliability, and scalability solutions in their applications
- Collaborate with technical experts, key stakeholders, and team members to resolve complex problems
- Understand service level indicators and utilize service level objectives to proactively resolve issues before they impact customers (see the error-budget sketch after this listing)
- Demonstrate strong analytical skills to diagnose and resolve complex technical issues; perform root cause analysis and implement preventive measures; manage incidents and coordinate response efforts
- Drive initiatives for process and system improvements; support the adoption of site reliability engineering best practices within your team
- Complete the SRE Bar Raiser Program

Required Qualifications, Capabilities, and Skills
- Formal training or certification as a Site Reliability Engineer in an enterprise infrastructure environment and 3+ years of applied experience
- Proficient in site reliability culture and principles, and familiar with how to implement site reliability within an application or platform
- Proficient in at least one programming language such as Python, Java/Spring Boot, or .NET
- Proficient knowledge of software applications and technical processes within a given technical discipline (e.g., cloud, artificial intelligence, Android, etc.)
- Experience in observability, such as white- and black-box monitoring, service level objective alerting, and telemetry collection, using tools such as Grafana, Dynatrace, Prometheus, Datadog, and Splunk
- Experience with continuous integration and continuous delivery tools like Jenkins, GitLab, or Terraform
- Familiarity with containers and container orchestration, such as ECS, Kubernetes, and Docker
- Familiarity with CI/CD pipelines and tools like Jenkins, GitLab CI, or CircleCI
- Proficiency in scripting languages like Python
- Experience with cloud platforms like AWS, Google Cloud, or Azure
- Understanding of infrastructure as code (IaC) using tools like Terraform or Ansible

Preferred Qualifications, Capabilities, and Skills
- Strong communication skills to collaborate with cross-functional teams
- Skills in planning for future growth and scalability of systems
- Experience with data protection solutions such as Cohesity or Commvault

About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.

We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About the Team
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.
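To make the service-level-objective language above concrete, here is a small, hedged Python sketch of the standard error-budget arithmetic an SRE applies to SLIs and SLOs (the request counts and target are illustrative numbers only):

```python
# Error-budget arithmetic for a hypothetical 99.9% availability SLO
# over a 30-day window; the request counts are made-up examples.
SLO_TARGET = 0.999
total_requests = 10_000_000
failed_requests = 4_200

sli = 1 - failed_requests / total_requests           # measured availability
error_budget = 1 - SLO_TARGET                        # allowed failure fraction
budget_consumed = (failed_requests / total_requests) / error_budget

print(f"SLI: {sli:.5f}")                                # 0.99958
print(f"Error budget consumed: {budget_consumed:.1%}")  # 42.0%
```

With 42% of the budget consumed mid-window, an SRE team would typically slow risky releases well before the SLO is actually breached.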

Posted 5 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Title: Senior Data Engineer – Data Quality, Ingestion & API Development

Mandatory skill set: Python, PySpark, AWS, Glue, Lambda, CI/CD
Total experience: 8+ years
Relevant experience: 8+ years
Work Location: Trivandrum / Kochi
Preference is given to candidates from Kerala and Tamil Nadu who are ready to relocate to the above locations. Candidates must have experience in a lead role as a Data Engineer.

Job Overview
We are seeking an experienced Senior Data Engineer to lead the development of a scalable data ingestion framework while ensuring high data quality and validation. The successful candidate will also be responsible for designing and implementing robust APIs for seamless data integration. This role is ideal for someone with deep expertise in building and managing big data pipelines using modern AWS-based technologies, and who is passionate about driving quality and efficiency in data processing systems.

Key Responsibilities
- Data Ingestion Framework:
  - Design & Development: Architect, develop, and maintain an end-to-end data ingestion framework that efficiently extracts, transforms, and loads data from diverse sources
  - Framework Optimization: Use AWS services such as AWS Glue, Lambda, EMR, ECS, EC2, and Step Functions to build highly scalable, resilient, and automated data pipelines
- Data Quality & Validation:
  - Validation Processes: Develop and implement automated data quality checks, validation routines, and error-handling mechanisms to ensure the accuracy and integrity of incoming data (see the sketch after this listing)
  - Monitoring & Reporting: Establish comprehensive monitoring, logging, and alerting systems to proactively identify and resolve data quality issues
- API Development:
  - Design & Implementation: Architect and develop secure, high-performance APIs to enable seamless integration of data services with external applications and internal systems
  - Documentation & Best Practices: Create thorough API documentation and establish standards for API security, versioning, and performance optimization
- Collaboration & Agile Practices:
  - Cross-Functional Communication: Work closely with business stakeholders, data scientists, and operations teams to understand requirements and translate them into technical solutions
  - Agile Development: Participate in sprint planning, code reviews, and agile ceremonies, while contributing to continuous improvement initiatives and CI/CD pipeline development (using tools like GitLab)

Required Qualifications
- Professional Background: At least 5 years of relevant experience in data engineering, with a strong emphasis on analytical platform development
- Programming Skills: Proficiency in Python and/or PySpark and SQL for developing ETL processes and handling large-scale data manipulation
- AWS Expertise: Extensive experience using AWS services including AWS Glue, Lambda, Step Functions, and S3 to build and manage data ingestion frameworks
- Data Platforms: Familiarity with big data systems (e.g., AWS EMR, Apache Spark, Apache Iceberg) and databases like DynamoDB, Aurora, Postgres, or Redshift
- API Development: Proven experience designing and implementing RESTful APIs and integrating them with external and internal systems
- CI/CD & Agile: Hands-on experience with CI/CD pipelines (preferably GitLab) and Agile development methodologies
- Soft Skills: Strong problem-solving abilities and attention to detail; excellent communication and interpersonal skills, with the ability to work independently and collaboratively
o Capacity to quickly learn and adapt to new technologies and evolving business requirements. Preferred Qualifications • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field. • Experience with additional AWS services such as Kinesis, Firehose, and SQS. • Familiarity with data lakehouse architectures and modern data quality frameworks. • Prior experience in a role that required proactive data quality management and API- driven integrations in complex, multi-cluster environments. Candidate those who are Interested please drop your resume to: gigin.raj@greenbayit.com MOB NO - 8943011666 Show more Show less
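As context for the data-quality responsibilities described above, here is a minimal, hypothetical PySpark sketch of automated validation checks. The bucket paths, column names, and the 1% error budget are illustrative assumptions, not details from the posting.

```python
# Minimal PySpark data-quality check of the kind the posting describes.
# Paths, columns (order_id, amount), and thresholds are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

df = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path

total = df.count()
checks = {
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
    "duplicate_order_id": total - df.dropDuplicates(["order_id"]).count(),
}

# Fail the run if any check exceeds a 1% error budget (assumed threshold).
failed = {name: n for name, n in checks.items() if total and n / total > 0.01}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")

df.write.mode("overwrite").parquet("s3://example-bucket/validated/orders/")
```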

Posted 5 days ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site


🚀 We’re Hiring: Senior Data Engineer | Immediate Joiner
📍 Location: Kochi / Trivandrum | 💼 Experience: 10+ Years
🌙 Shift: US Overlapping Hours (till 10 PM IST)

We are looking for a Senior Data Engineer / Associate Architect who thrives on solving complex data problems and leading scalable data infrastructure development.

Must-Have Skillset:
✅ Python, PySpark
✅ AWS Glue, Lambda, Step Functions
✅ CI/CD (GitLab), API Development
✅ 5+ years of hands-on AWS expertise
✅ Strong understanding of Data Quality, Validation & Monitoring

Role Highlights:
🔹 Build & optimize AWS-based data ingestion frameworks
🔹 Implement high-performance APIs
🔹 Drive data quality & integrity
🔹 Collaborate across teams in Agile environments

Nice to Have:
➕ Experience with Kinesis, Firehose, SQS
➕ Familiarity with Lakehouse architectures

Posted 5 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Role Description
• Strong experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and tools such as Jenkins, GitLab CI, Bamboo, Ansible, Puppet, Chef, Docker, and Kubernetes (see the sketch after this description).
• Proficient scripting skills in Python, Shell, PowerShell, Groovy, and Perl.
• Experience with infrastructure automation tools (Ansible, Puppet, Chef, Terraform).
• Experience managing repositories and migration automation (Git, Bitbucket, GitHub, ClearCase).
• Experience with build automation tools (Maven, Ant).
• Artefact repository management experience (Nexus, Artifactory).
• Knowledge of cloud infrastructure configuration and migration (AWS, Azure, Google Cloud).
• Experience integrating CI/CD pipelines with code quality and test automation tools (SonarQube, Selenium, JUnit, NUnit).
• Skilled in containerization technologies (Docker, Kubernetes).

Skills: AWS, DevOps, CI/CD, Terraform
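As a small illustration of working with GitLab pipelines programmatically, one routine task in roles like this, here is a hedged sketch using the third-party python-gitlab client. The project path and token variable are assumptions, not details from the listing.

```python
# Sketch: trigger a GitLab CI pipeline and poll its status with python-gitlab.
# The project path and GITLAB_TOKEN env var are illustrative assumptions.
import os
import time

import gitlab  # pip install python-gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("example-group/example-project")

# Start a pipeline on a branch and wait for it to finish.
pipeline = project.pipelines.create({"ref": "main"})
while pipeline.status in ("created", "pending", "running"):
    time.sleep(10)
    pipeline.refresh()  # re-fetch the pipeline from the API

print(f"Pipeline {pipeline.id} finished with status: {pipeline.status}")
```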

Posted 5 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description

Position at Wind River: Fullstack - MTS

Why Choose Wind River?
In a world increasingly driven by software innovation, Wind River is pioneering the technologies to accelerate the digital transformation of our customers with a new generation of Mission Critical AI Systems in an AI-first world with the most exacting standards for safety, security, performance, and reliability. Success will be determined by our ability to innovate with velocity and sell at the solutions level.

Wind River’s impact spans critical infrastructure domains such as telecommunications, including 5G; industrial (automation, sustainable energy, robotics, mining); connected healthcare and medical devices; automotive (connected and self-driving vehicles); and aerospace & defense. We were recognized by VDC Research in July 2020 as #1 in Edge Compute OS Platforms, overtaking Microsoft as the overall commercial leader. Wind River regularly wins industry recognitions for excellence in IoT security, cloud and edge computing, as well as 8 consecutive years as a “Top Work Place”. If you’re passionate about amplifying your impact on the world, in a caring, respectful culture with a growth mindset, come join us and help lead the way into the future of the intelligent edge!

About The Opportunity
Wind River Systems is seeking an experienced, high-performing DevSecOps software engineer for a position supporting a cloud-based application development team. The successful candidate will join a highly skilled development team delivering internal and external tools and technologies across a complete continuous testing platform providing support for test automation, pioneering many new industry-leading capabilities. The successful candidate must have experience in cloud-native software development and be a highly adaptable team player who can quickly ramp up on new technologies and accomplish goals in a fast-paced agile environment. A combination of strong technical and communication skills is a must.

About You
BSEE/BSCS or equivalent experience
Strong knowledge of microservices architecture, design principles, and patterns
Solid experience in full stack development, including both frontend and backend technologies
Expertise in designing and developing RESTful APIs and integrating external services (a sketch follows this posting)
Proficiency in programming languages such as Python or Node.js
Strong experience with SQL, database design, and DB migrations
Strong experience with Git workflows
Experience with frontend frameworks like Angular, along with JavaScript and TypeScript
Strong knowledge of CI/CD pipelines and related tools (e.g., Jenkins, GitLab CI/CD)
Experience with Docker and Kubernetes
Experience with cloud platforms such as AWS, Google Cloud, and Azure

Benefits
Workplace flexibility: hybrid work
Medical insurance: group medical insurance coverage, plus an additional shared-cost medical benefit in the form of reimbursements
Employee Assistance Program
Vacation and time off: employees are eligible for various types of paid time off, with additional time off for birthdays, volunteering, and weddings
Wellness benefits through Unmind
Carrot (family-forming support)
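As an illustration of the RESTful API work this listing names, here is a minimal sketch using FastAPI, one common Python choice; the "device" resource model is an invented assumption, not part of the posting.

```python
# Minimal REST API sketch (FastAPI); the device resource is illustrative only.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="example-device-service")

class Device(BaseModel):
    id: int
    name: str
    status: str = "offline"

_DEVICES: dict[int, Device] = {}  # in-memory store, stand-in for a real DB

@app.post("/devices", status_code=201)
def create_device(device: Device) -> Device:
    _DEVICES[device.id] = device
    return device

@app.get("/devices/{device_id}")
def get_device(device_id: int) -> Device:
    if device_id not in _DEVICES:
        raise HTTPException(status_code=404, detail="device not found")
    return _DEVICES[device_id]

# Run locally with: uvicorn main:app --reload
```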

Posted 5 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Overview
Reporting to the Manager of Software Engineering, this position is a member of a small software group in AMETEK’s India office. This group is part of a larger software team that includes software engineers in NJ. The team is responsible for the design, development, and support of leading-edge software products that support our world-class Phantom Cameras. Primarily, this is a Graphical User Interface (GUI) product and a Software Development Kit (SDK) that controls and communicates with our cameras; downloads and views one or multiple cines (video files); and performs image processing, file transfers, file editing, etc. as required. The current GUI is written in C++, Qt, and QML; the legacy UI is written in C#; and the SDK is written in C/C++.

Job Responsibilities
The job responsibilities include, but are not limited to:
The primary focus of this position will be on releasing and supporting the SDK, using C/C++ and Microsoft Visual Studio, and on the various desktop applications and libraries, mainly using Qt and QML.
Other duties as assigned.

Necessary Skills/Talents
B.S. degree (M.S. preferred) in Computer Science, Electrical Engineering, Computer Engineering, or equivalent, with 5-10 years of experience in software development.
Dependable, driven, teachable person with a good work ethic who is excited to learn and take on new challenges.
Thorough understanding of C/C++ design and programming concepts.
Experience with Qt and QML.
Image processing & compression, OpenCL, GPU (CUDA), Windows Sockets; familiarity with codecs, e.g., H.264 and H.265 (Microsoft Media Foundation Encoder), the DirectShow API, and the x264 and x265 codecs.
Will be required to work a few hours 2 or 3 evenings a week to coordinate with the NJ software team.
Flexible, able to change priorities when given new directives for the greater good of the team.
Committed to progress and comfortable with occasional fluidity in hours, to ensure synchronicity between the India and US teams.
Strong verbal and written communication skills.
Experience in troubleshooting, debugging, and maintaining existing code.
Excellent technical judgment and decision-making skills.
Recognizes speed of execution as a competitive advantage for Vision Research and thus makes decisions and takes risks to support the rapid development of products and solutions.

Desirable Skills
Experience with C#
Experience on Linux and macOS is a plus
GitLab, Git, CI/CD

Vision Research is a business unit in the Materials Analysis Division of AMETEK, Inc. Vision Research manufactures industry-leading high-speed digital cameras. Our cameras are primarily sold into industrial, academic, defense, and government research facilities. We also have a smaller entertainment-oriented camera business. Although not our primary focus, Vision Research has received both an Academy Award and an Emmy for our technical contribution to the entertainment industry. To learn more about Vision Research, Phantom cameras, and our imaging capabilities, please visit www.phantomhighspeed.com.

AMETEK, Inc. is a leading global provider of industrial technology solutions serving a diverse set of attractive niche markets with annual sales over $7.0 billion. AMETEK is committed to making a safer, sustainable, and more productive world a reality. We use differentiated technology solutions to solve our customers’ most complex challenges. We employ 21,000 colleagues in 35 countries, grounded by our core values: Ethics and Integrity, Respect for the Individual, Inclusion, Teamwork, and Social Responsibility. AMETEK (NYSE: AME) is a component of the S&P 500. Visit www.ametek.com for more information.

Posted 5 days ago

Apply

10.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Title: Quantitative Trading Consultant – Specialist
Department: Technology
Location: Mumbai (In-office)
Budget: Up to ₹5,00,000 per month (₹60 LPA)
Experience Required: 10+ Years
Notice Period: Open to candidates currently serving notice
Urgency: Immediate Requirement
Company Type: Large-scale organization with substantial AUM and rapid growth

Role Overview
We are hiring a highly accomplished Quantitative Trading Consultant with deep expertise in building and running mid-frequency and low-frequency trading desks. This full-time specialist role demands a sharp, independent thinker with proven experience across the entire trading stack, from infrastructure setup to execution, risk, and compliance. You will work in a fast-paced, high-performance environment with direct access to senior leadership, contributing to a firm with a strong market presence and sizable assets under management (AUM).

Key Responsibilities
Infrastructure Setup: Architect and implement scalable trading infrastructure, comprising servers, execution gateways, and broker/exchange connectivity.
Market Data Management: Build and maintain real-time market data feeds using WebSocket APIs, ensuring minimal latency and robust data reliability (a sketch follows this posting).
Strategy Development & Backtesting: Create and enforce best practices for strategy research, backtesting, forward testing, and real-time deployment.
Execution Systems: Develop fault-tolerant, low-latency execution engines with embedded risk controls and efficient error handling.
Risk Management: Design real-time risk monitoring systems, enforce position and exposure limits, and ensure compliance with SEBI/NSE/BSE regulations.
Monitoring & Alerting: Deploy and maintain monitoring systems using Prometheus, Grafana, and the ELK Stack for continuous visibility and alerts.
Team Collaboration: Liaise with quants, developers, analysts, and DevOps to ensure smooth trading operations and system integration.
Compliance & Documentation: Maintain detailed documentation of trading systems, workflows, risk controls, and regulatory compliance measures.

Required Skills & Qualifications
Deep understanding of quantitative trading strategies, financial markets, and market microstructure
Proficiency in Python, with working knowledge of C++ or Rust for performance-critical components
Expertise in real-time data pipelines using Kafka and Redis, and experience with PostgreSQL, MongoDB, or TimescaleDB
Familiarity with CI/CD pipelines, GitLab/GitHub, Docker, Kubernetes, and cloud platforms (AWS/GCP)
Proven experience in WebSocket API integration and building latency-sensitive systems
Strong analytical mindset, risk awareness, and problem-solving skills
Sound understanding of Indian market compliance standards

Preferred Experience
Prior ownership of, or key contribution to, a quant trading desk (mid-frequency or low-frequency)
Experience in Indian equity, futures, and options markets
Experience with algorithmic trading infrastructure and strategy deployment

Reporting
This role reports directly to senior management and works closely with the trading, tech, and risk leadership teams.

Skills: Kubernetes, WebSocket APIs, MongoDB, quantitative trading strategies, cloud platforms, GCP, GitLab, AWS, TimescaleDB, C++, market microstructure, Redis, financial markets, Kafka, risk management, CI/CD pipelines, APIs, Indian market compliance standards, real-time data pipelines, backtesting, GitHub, PostgreSQL, Python, Docker, Rust
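As context for the real-time market data responsibility above, here is a minimal sketch of a resilient WebSocket consumer using the Python websockets library. The feed URL, subscription message, and tick fields are invented assumptions, not any real broker API.

```python
# Sketch of a reconnecting WebSocket market-data consumer; the endpoint and
# message shape are illustrative assumptions, not a real exchange feed.
import asyncio
import json

import websockets  # pip install websockets

FEED_URL = "wss://example-feed.invalid/marketdata"  # hypothetical endpoint

async def consume() -> None:
    # Reconnect with a small backoff so transient drops don't kill the feed.
    while True:
        try:
            async with websockets.connect(FEED_URL, ping_interval=20) as ws:
                await ws.send(json.dumps({"action": "subscribe", "symbols": ["NIFTY"]}))
                async for raw in ws:
                    tick = json.loads(raw)
                    # Hand off to a queue/Kafka here; printing keeps the sketch short.
                    print(tick.get("symbol"), tick.get("ltp"))
        except (websockets.ConnectionClosed, OSError):
            await asyncio.sleep(1)  # backoff before reconnecting

if __name__ == "__main__":
    asyncio.run(consume())
```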

Posted 5 days ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Title: Quantitative Trading Consultant – Operations & Trading Systems
Location: Mumbai (In-office)
Compensation: Up to ₹1,60,000 per month (₹10–20 LPA based on experience)
Industry: Operations / Manufacturing / Production / Trading
Type: Full-time | On-site

Role Overview
We are seeking a highly skilled and technically sound Quantitative Trading Consultant to lead the setup and execution of our mid-frequency and low-frequency trading desk. This role requires a deep understanding of trading infrastructure, execution systems, real-time data management, and risk control. You will be responsible for building the trading architecture from the ground up, collaborating with research and tech teams, and ensuring regulatory compliance in Indian financial markets.

Key Responsibilities
Infrastructure Setup: Design and implement end-to-end trading infrastructure, covering data servers, execution systems, and broker/exchange connectivity.
Real-Time Data Handling: Build and maintain real-time market data feeds using WebSocket APIs, ensuring minimal latency and high reliability.
Strategy Development Framework: Establish frameworks and tools for backtesting, forward testing, and strategy deployment across multiple asset classes.
Execution System Development: Develop low-latency, high-reliability execution code with robust risk- and error-handling mechanisms.
Risk Management: Design and implement real-time risk control systems, including position sizing, exposure monitoring, and compliance with SEBI/NSE/BSE regulations.
Monitoring & Alerting: Set up systems using Prometheus, Grafana, and the ELK stack for monitoring, logging, and proactive issue alerts.
Team Collaboration: Work closely with quant researchers, DevOps, developers, and analysts to ensure smooth desk operations.
Documentation & Compliance: Maintain detailed documentation of all infrastructure, workflows, trading protocols, and risk procedures. Ensure adherence to relevant regulatory guidelines.

Required Skills & Qualifications
Expert knowledge of quantitative trading, market microstructure, and execution strategy.
Strong programming skills in Python, with working knowledge of C++ or Rust for performance-critical modules.
Hands-on experience with WebSocket API integration, Kafka, Redis, and PostgreSQL/TimescaleDB/MongoDB.
Familiarity with CI/CD tools, GitHub/GitLab, Docker, Kubernetes, and AWS/GCP cloud environments.
Sound understanding of risk management frameworks and compliance in Indian markets.
Excellent problem-solving and analytical thinking abilities.
Strong attention to detail, documentation, and process adherence.

Preferred Experience
Previous experience in setting up or managing a quantitative trading desk (mid-frequency or low-frequency).
Hands-on exposure to Indian equities, futures, and options markets.
Experience working in a high-growth, fast-paced trading or hedge fund environment.

Reporting Structure
This role reports directly to senior management and works cross-functionally with technology, trading, and risk management teams.

Why Join Us
Opportunity to build and lead the trading infrastructure from the ground up.
Work in a high-growth company with a strong focus on innovation and technology.
Collaborate with top talent across trading, development, and research.
Gain exposure to cutting-edge trading tools and modern cloud-native infrastructure.

Skills: quantitative trading, attention to detail, Python, C++, risk management, Redis, problem-solving, Rust, execution strategy, GitLab, Kafka, Docker, GitHub, analytical thinking, market microstructure, MongoDB, PostgreSQL, CI/CD, AWS, Kubernetes, regulatory compliance, WebSocket APIs, GCP, TimescaleDB, monitoring, APIs, alerting

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the GCP/Azure Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and Azure platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and Azure services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/Azure services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform and the platforms’ native IaC offerings, ensuring efficient, repeatable, and consistent infrastructure deployments
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new GCP/Azure services, features, and best practices, providing recommendations for process and architecture improvements

Education And Experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
Proven experience with core GCP/Azure services for compute, storage, managed databases, serverless functions, networking, and identity and access management
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform or the native GCP/Azure equivalents
Strong scripting skills in Python, Bash, or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/Azure
Knowledge of networking fundamentals and experience with GCP/Azure virtual networks, security groups, VPN, and routing
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments
Preferably holding relevant GCP/Azure certifications, such as DevOps Engineer or Solutions Architect certifications

Skills And Behavioral Competencies
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.

Posted 5 days ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Syensqo is all about chemistry. We’re not just referring to chemical reactions here, but also to the magic that occurs when the brightest minds get to work together. This is where our true strength lies. In you. In your future colleagues and in all your differences. And of course, in your ideas to improve lives while preserving our planet’s beauty for the generations to come.

Job Overview And Responsibilities
This position will be based in Pune, India. As the AWS Cloud Engineer, you will be responsible for designing, implementing, and optimizing scalable, resilient cloud infrastructure on the Google Cloud and AWS platforms. This role involves deploying, automating, and maintaining cloud-based applications, services, and tools to ensure high availability, security, and performance. The ideal candidate will have in-depth knowledge of GCP and AWS services and architecture best practices, along with strong experience in infrastructure automation, monitoring, and troubleshooting.

We count on you for:
Design and implement secure, scalable, and highly available cloud infrastructure using GCP/AWS services, based on business and technical requirements
Develop automated deployment pipelines using Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, and the AWS CDK, ensuring efficient, repeatable, and consistent infrastructure deployments (see the sketch after this posting)
Implement and manage security practices such as Identity and Access Management, network security, and encryption to ensure data protection and compliance with industry standards and regulations
Design and implement backup, disaster recovery, and failover solutions for high availability and business continuity
Create and maintain comprehensive documentation of infrastructure architecture, configuration, and troubleshooting steps, and share knowledge with team members
Collaborate closely with the multi-cloud enterprise architect, DevOps solution architect, and Cloud Operations Manager to ensure a quick MVP prior to pushing into production
Keep up to date with new GCP/AWS services, features, and best practices, providing recommendations for process and architecture improvements

Education and experience
Bachelor's degree in Information Technology, Computer Science, Business Administration, or a related field. A Master's degree or relevant certifications would be a plus.
Minimum of 2-5 years of experience in a cloud engineering, cloud architecture, or infrastructure role
Proven experience with AWS services, including EC2, S3, RDS, Lambda, VPC, IAM, and CloudFormation
Hands-on experience with Infrastructure-as-Code (IaC) tools such as Terraform, AWS CloudFormation, or the AWS CDK
Strong scripting skills in Python, Bash, or PowerShell for automation tasks
Familiarity with CI/CD tools (e.g., GitLab CI/CD, Jenkins) and experience integrating them with GCP/AWS
Knowledge of networking fundamentals and experience with VPCs, security groups, VPN, and routing
Proficiency in monitoring and logging tools, whether native cloud tools or third-party tools like Datadog and Splunk
Cybersecurity expertise: understanding of cybersecurity principles, best practices, and frameworks; knowledge of encryption, identity management, access controls, and other security measures within cloud environments
Preferably with certifications such as AWS Certified DevOps Engineer, AWS Certified SysOps Administrator, or AWS Certified Solutions Architect

Skills and behavioral competencies
Excellent problem-solving and troubleshooting abilities
Result orientation, influence & impact
Empowerment & accountability, with the ability to work independently
Team spirit, building relationships, collective accountability
Excellent oral and written communication skills for documenting and sharing information with technical and non-technical stakeholders

Language skills
English mandatory

What’s in it for the candidate
Be part of and contribute to a once-in-a-lifetime change journey
Join a dynamic team that is going to tackle big bets
Have fun and work at a high pace

About Us
Syensqo is a science company developing groundbreaking solutions that enhance the way we live, work, travel and play. Inspired by the scientific councils which Ernest Solvay initiated in 1911, we bring great minds together to push the limits of science and innovation for the benefit of our customers, with a diverse, global team of more than 13,000 associates. Our solutions contribute to safer, cleaner, and more sustainable products found in homes, food and consumer goods, planes, cars, batteries, smart devices and health care applications. Our innovation power enables us to deliver on the ambition of a circular economy and explore breakthrough technologies that advance humanity.

At Syensqo, we seek to promote unity and not uniformity. We value the diversity that individuals bring and we invite you to consider a future with us, regardless of background, age, gender, national origin, ethnicity, religion, sexual orientation, ability or identity. We encourage individuals who may require any assistance or accommodations to let us know to ensure a seamless application experience. We are here to support you throughout the application journey and want to ensure all candidates are treated equally. If you are unsure whether you meet all the criteria or qualifications listed in the job description, we still encourage you to apply.
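As a small illustration of the Infrastructure-as-Code workflow named in this listing, here is a minimal sketch using the AWS CDK's Python bindings (v2). The stack and bucket are illustrative assumptions, not part of the posting.

```python
# Minimal AWS CDK (v2, Python) IaC sketch; the stack and bucket are invented.
# pip install aws-cdk-lib constructs ; deploy with: cdk deploy
from aws_cdk import App, RemovalPolicy, Stack
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataBucketStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Versioned, encrypted bucket; destroyed on stack teardown (demo only).
        s3.Bucket(
            self,
            "ExampleDataBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            removal_policy=RemovalPolicy.DESTROY,
            auto_delete_objects=True,
        )

app = App()
DataBucketStack(app, "example-data-bucket-stack")
app.synth()
```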

Posted 5 days ago

Apply

5.0 years

0 Lacs

Udaipur, Rajasthan, India

On-site


Job Summary
We are looking for an experienced DevOps Lead to join our technology team and drive the design, implementation, and optimization of our DevOps processes and infrastructure. You will lead a team of engineers to ensure smooth CI/CD workflows, scalable cloud environments, and high availability for all deployed applications. This is a hands-on leadership role requiring a strong technical foundation and a collaborative mindset.

Key Responsibilities
Lead the DevOps team and define best practices for CI/CD pipelines, release management, and infrastructure automation.
Design, implement, and maintain scalable infrastructure using tools such as Terraform, CloudFormation, or Ansible.
Manage and optimize cloud services (e.g., AWS, Azure, GCP) for cost, performance, and security.
Oversee monitoring, alerting, and logging systems (e.g., Prometheus, Grafana, ELK, Datadog); a sketch follows this posting.
Implement and enforce security, compliance, and governance policies in cloud environments.
Collaborate with development, QA, and product teams to ensure reliable and efficient software delivery.
Lead incident response and root cause analysis for production issues.
Evaluate new technologies and tools to improve system efficiency and reliability.

Required Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in DevOps or SRE roles, with at least 2 years in a lead or managerial capacity.
Strong experience with CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI/CD).
Expertise in infrastructure as code (IaC) and configuration management.
Proficiency in scripting languages (e.g., Python, Bash).
Deep knowledge of containerization and orchestration tools (e.g., Docker, Kubernetes).
Experience with version control (Git), artifact repositories, and deployment strategies (blue/green, canary).
Solid understanding of networking, DNS, firewalls, and security protocols.

Preferred Qualifications
Certifications (e.g., Azure Certified DevOps Engineer, CKA/CKAD).
Experience in a regulated environment (e.g., HIPAA, PCI, SOC2).
Exposure to observability platforms and chaos engineering practices.
Background in agile/scrum methodologies.

Skills
Strong leadership and team-building capabilities.
Excellent problem-solving and troubleshooting skills.
Clear and effective communication, both written and verbal.
Ability to work under pressure and adapt quickly in a fast-paced environment.

(ref:hirist.tech)
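As context for the monitoring and alerting responsibility above, here is a minimal sketch of exposing application metrics for Prometheus scraping with the prometheus_client library; the metric names and the simulated workload are illustrative assumptions.

```python
# Sketch: expose service metrics for Prometheus scraping.
# Metric names and the fake workload are illustrative only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request() -> None:
    with LATENCY.time():                      # records request duration
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="200").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```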

Posted 5 days ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Company: Our client is a leading Indian multinational IT services and consulting firm. It provides digital transformation, cloud computing, data analytics, enterprise application integration, infrastructure management, and application development services. The company caters to over 700 clients across various industries, including banking and financial services, manufacturing, technology, media, retail, and travel and hospitality. Its industry-specific solutions are designed to address complex business challenges by combining domain expertise with deep technical capabilities, backed by a global workforce of over 80,000 professionals and a presence in more than 50 countries.

Job Title: Python Developer
Locations: PAN India
Experience: 5-10 Years (Relevant)
Employment Type: Contract to Hire
Work Mode: Work From Office
Notice Period: Immediate to 15 Days

Job Description:
Cloud Computing: Proficiency in cloud platforms such as AWS, Google Cloud, or Azure
Containerization: Experience with Docker and Kubernetes for container orchestration
CI/CD: Strong knowledge of continuous integration and continuous delivery processes using tools like Jenkins, GitLab CI, or Azure DevOps
Infrastructure as Code (IaC): Experience with IaC tools such as Terraform or CloudFormation
Scripting and Programming: Proficiency in scripting languages (e.g., Python, Bash) and programming languages (e.g., Java, Go)
Monitoring and Logging: Familiarity with monitoring tools (e.g., Prometheus, Grafana) and logging tools (e.g., the ELK stack)
Security: Knowledge of security best practices and tools for securing platforms and data
Networking: Understanding of networking concepts and technologies
Database Management: Experience with both SQL and NoSQL databases
Automation: Proficiency in automation tools and frameworks
Version Control: Strong knowledge of version control systems like Git
Development Understanding: Solid understanding of the software development life cycle (SDLC) and experience working closely with development teams

Mandatory Skills: Azure API Management, Azure Blob Storage, Azure Cloud Architecture, Azure Container Apps, Azure Cosmos DB, Azure DevOps, Azure Event Grid, Azure Functions, Azure IoT, Docker, Kubernetes

Posted 5 days ago

Apply

Exploring GitLab Jobs in India

GitLab is a popular DevOps platform that is widely used by companies in India for version control, collaboration, and CI/CD automation. As more and more organizations adopt DevOps practices, the demand for GitLab professionals in India is on the rise. Job seekers with GitLab skills can explore a wide range of opportunities in various industries across the country.

Top Hiring Locations in India

  1. Bangalore - Known as the Silicon Valley of India, Bangalore has a thriving tech industry with many companies actively hiring for GitLab roles.
  2. Pune - Another major IT hub in India, Pune offers plenty of opportunities for GitLab professionals.
  3. Hyderabad - With a growing tech scene, Hyderabad is a great place to look for GitLab jobs.
  4. Chennai - The capital city of Tamil Nadu is home to many IT companies that are in need of GitLab experts.
  5. Gurgaon - Located near the national capital, Gurgaon is a hub for IT and finance companies that frequently hire GitLab professionals.

Average Salary Range

The average salary range for GitLab professionals in India varies based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the GitLab job market in India, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, DevOps Engineer or DevOps Manager.

Related Skills

In addition to GitLab expertise, employers in India often look for professionals with skills in CI/CD pipelines, Docker, Kubernetes, Jenkins, AWS/Azure/GCP, and scripting languages like Python or Shell scripting.

Interview Questions

  • What is GitLab and how does it differ from other version control systems? (basic)
  • How would you set up a CI/CD pipeline using GitLab CI? (medium)
  • Can you explain the difference between a Git commit and a Git push? (basic)
  • What is a GitLab runner and how does it work? (medium)
  • How do you handle merge conflicts in GitLab? (medium)
  • Explain the purpose of a .gitignore file in a Git repository. (basic)
  • How would you integrate GitLab with Kubernetes for deploying applications? (advanced)
  • What are Git hooks and how can they be useful in a GitLab workflow? (medium)
  • Describe how GitLab handles branching and merging of code changes. (medium)
  • What security measures would you implement to secure a GitLab repository? (advanced)
  • How do you revert a commit in GitLab? (basic)
  • Explain the difference between GitLab CE and GitLab EE. (basic)
  • How would you troubleshoot a failing GitLab pipeline? (medium)
  • What is GitLab Pages and how can it be used for hosting websites? (medium)
  • Describe the purpose of GitLab artifacts and how they are used in CI/CD pipelines. (medium)
  • How do you manage permissions and access control in a GitLab repository? (medium)
  • What are Git submodules and when would you use them in a GitLab project? (medium)
  • Explain the advantages of using GitLab over other version control systems like SVN. (basic)
  • How do you handle large binary files in a GitLab repository? (medium)
  • What is GitLab's built-in issue tracking system and how does it integrate with the Git workflow? (medium)
  • Describe the process of forking a repository in GitLab. (basic)
  • How do you use GitLab's code review feature to collaborate with team members? (medium)
  • What is GitLab's Auto DevOps feature and how does it simplify the CI/CD process? (medium)
  • How would you monitor the performance of a GitLab CI/CD pipeline? (medium; see the sketch after this list)
  • Can you explain the concept of GitLab's "Merge Requests" and how they facilitate code collaboration? (medium)
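For the pipeline-performance question above, one concrete approach is to pull recent pipeline durations from the GitLab API. Here is a hedged sketch using the third-party python-gitlab client; the project path and token variable are illustrative assumptions.

```python
# Sketch: report the average duration of recent GitLab pipelines.
# The project path and GITLAB_TOKEN env var are illustrative assumptions.
import os

import gitlab  # pip install python-gitlab

gl = gitlab.Gitlab("https://gitlab.com", private_token=os.environ["GITLAB_TOKEN"])
project = gl.projects.get("example-group/example-project")

durations = []
for summary in project.pipelines.list(per_page=20, get_all=False):
    pipeline = project.pipelines.get(summary.id)  # detail view includes duration
    if pipeline.duration:  # skipped/canceled runs may have no duration
        durations.append(pipeline.duration)

if durations:
    avg = sum(durations) / len(durations)
    print(f"average duration over {len(durations)} runs: {avg:.0f}s")
```

In a real setup you would feed these numbers into a dashboard (e.g., Grafana) or use GitLab's own CI/CD analytics rather than polling ad hoc.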

Closing Remark

As the demand for GitLab professionals continues to grow in India, now is a great time for job seekers to enhance their skills and apply confidently for exciting opportunities in the field. Prepare well, showcase your expertise, and seize the GitLab job that aligns with your career goals. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies