5.0 years
0 Lacs
Greater Kolkata Area
Remote
Data Engineer - Google Cloud

Location: Remote, India

About Us
Aviato Consulting is looking for a highly skilled and motivated Data Engineer to join our expanding team. This role is ideal for someone with a deep understanding of cloud-based data solutions, with a focus on Google Cloud Platform (GCP) and associated technologies. GCP certification is mandatory for this position to ensure the highest level of expertise and professionalism. You will work directly with clients, translating their business requirements into scalable data solutions, while providing technical expertise and guidance.

Key Responsibilities
- Client Engagement: Work closely with clients to understand business needs, gather technical requirements, and design solutions leveraging GCP services.
- Data Pipeline Design & Development: Build and manage scalable data pipelines using tools such as Apache Beam, Cloud Dataflow, and Cloud Composer.
- Data Warehousing & Lake Solutions: Architect, implement, and optimize BigQuery-based data lakes and warehouses.
- Real-Time Data Processing: Implement and manage streaming data pipelines using Kafka, Pub/Sub, and similar technologies.
- Data Analysis & Visualization: Create insightful dashboards and visualizations using tools like Looker, Data Studio, or Tableau.
- Technical Leadership & Mentorship: Provide guidance and mentorship to team members and clients, helping them leverage the full potential of Google Cloud.

Required Qualifications
- Experience: 5+ years as a Data Engineer working with cloud-based platforms.
- Proven experience in Python with libraries like Pandas and NumPy.
- Strong understanding of and experience with FastAPI for building APIs.
- Expertise in building data pipelines using Apache Beam, Cloud Dataflow, or similar tools.
- Solid knowledge of Kafka for real-time data streaming.
- Proficiency with BigQuery, Google Pub/Sub, and other Google Cloud services.
- Familiarity with Apache Hadoop for distributed data processing.
Technical Skills
- Strong understanding of data architecture and processing techniques.
- Experience with big data environments and tools like Apache Hadoop.
- Solid understanding of ETL pipelines: data ingestion, transformation, and storage.
- Knowledge of data modeling, data warehousing, and big data management principles.

Certifications
- Google Cloud certification (Professional Data Engineer or Professional Cloud Architect) is mandatory for this role.

Soft Skills
- Excellent English communication skills.
- Client-facing experience and the ability to manage client relationships effectively.
- Strong problem-solving skills with a results-oriented approach.

Preferred Qualifications
- Visualization Tools: Experience with tools like Looker, Power BI, or Tableau.

Benefits
- Competitive salary and benefits package.
- Opportunities to work with cutting-edge cloud technologies and large customers.
- Collaborative work environment that encourages learning and professional growth.
- A chance to work on high-impact projects for leading clients in diverse industries.

If you're passionate about data engineering, cloud technologies, and solving complex data problems for clients, we'd love to hear from you! (ref:hirist.tech)
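The posting above repeatedly asks for ETL pipeline skills (ingestion, transformation, storage). As a loose, stdlib-only sketch of that extract-transform-load pattern, here is a toy batch pipeline; the record schema (user_id, event, amount) and the in-memory "warehouse" sink are invented for illustration and stand in for real tools like Apache Beam or Cloud Dataflow.

```python
from collections import defaultdict

# Illustrative raw input rows; the schema is an assumption for this sketch.
RAW_ROWS = [
    "u1,purchase,30.0",
    "u2,purchase,12.5",
    "u1,refund,-5.0",
    "u2,purchase,7.5",
]

def extract(rows):
    """Ingestion step: parse CSV-like rows into records."""
    for row in rows:
        user_id, event, amount = row.split(",")
        yield {"user_id": user_id, "event": event, "amount": float(amount)}

def transform(records):
    """Transformation step: aggregate net amount per user."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["user_id"]] += rec["amount"]
    return dict(totals)

def load(totals, sink):
    """Load step: write aggregates to a sink standing in for a warehouse table."""
    sink.update(totals)

warehouse = {}
load(transform(extract(RAW_ROWS)), warehouse)
print(warehouse)  # {'u1': 25.0, 'u2': 20.0}
```

A production pipeline would replace each stage with a distributed equivalent (e.g. a Beam PTransform writing to BigQuery), but the three-stage shape is the same.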
Posted 2 days ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Senior DevOps Consultant

About The Role
We are seeking an experienced Senior DevOps Consultant to join our team of technology professionals. The ideal candidate will bring extensive expertise in DevOps practices, cloud platforms, and infrastructure automation, with a strong focus on implementing continuous integration/continuous delivery (CI/CD) pipelines and cloud-native solutions. This role requires an individual who can lead complex transformation initiatives, provide technical mentorship, and deliver high-quality solutions for our enterprise clients.

Position Overview
As a Senior DevOps Consultant, you will serve as a technical leader and subject matter expert for our DevOps practice. You will be responsible for designing, implementing, and optimizing DevOps strategies and toolchains, with particular emphasis on automation, infrastructure as code, and cloud-native architectures. You will work directly with clients to understand their business requirements and translate them into effective technical solutions that enhance development velocity, operational efficiency, and system reliability.

Key Responsibilities
- Lead complex DevOps transformation initiatives and implementation projects for enterprise clients.
- Design and implement comprehensive CI/CD pipelines across various platforms and technologies.
- Develop and implement infrastructure-as-code solutions using tools like Terraform, Ansible, or ARM templates.
- Create cloud-native architectures leveraging container technologies and orchestration platforms.
- Provide technical mentorship and guidance to junior team members and client development teams.
- Develop automation solutions to streamline build, test, and deployment processes.
- Conduct technical assessments of existing environments and provide strategic recommendations.
- Create detailed technical documentation and knowledge transfer materials.
- Implement monitoring, observability, and security solutions within DevOps practices.
- Present technical concepts and solutions to client stakeholders at all levels.
- Stay current with DevOps methodologies, tools, and emerging technologies.

Required Qualifications
- 7+ years of hands-on experience in IT, with at least 5 years specifically focused on DevOps practices.
- Advanced expertise in CI/CD tools and methodologies (e.g., Azure DevOps, Jenkins, GitLab CI, GitHub Actions).
- Strong experience with at least one major cloud platform (Azure, AWS, or GCP), preferably Azure.
- In-depth knowledge of infrastructure as code using tools like Terraform, ARM templates, or CloudFormation.
- Expertise in containerization technologies (Docker) and orchestration platforms (Kubernetes).
- Deep understanding of scripting and automation using PowerShell, Bash, Python, or similar languages.
- Experience implementing monitoring and observability solutions.
- Strong understanding of networking concepts and security best practices.
- Experience with version control systems, particularly Git-based workflows.
- Demonstrated experience leading DevOps transformation initiatives at enterprise scale.
- Excellent documentation, communication, and presentation skills.
- Ability to translate complex technical concepts for non-technical stakeholders.

Required Certifications
- Microsoft Certified: DevOps Engineer Expert.
- Microsoft Certified: Azure Administrator Associate.
- Microsoft Certified: Azure Solutions Architect Expert or AWS Certified Solutions Architect.

Preferred Qualifications
- Experience with multiple cloud platforms (Azure, AWS, GCP).
- Knowledge of database management and data pipeline implementations.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Understanding of microservices architectures and design patterns.
- Experience with security scanning and DevSecOps implementation.
- Background in mentoring junior consultants and developing team capabilities.
- Experience with Site Reliability Engineering (SRE) practices.
Preferred Certifications
- Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD).
- HashiCorp Certified: Terraform Associate.
- AWS Certified DevOps Engineer Professional (if working with AWS).
- Google Professional Cloud DevOps Engineer (if working with GCP).

Professional Skills
- Exceptional problem-solving and troubleshooting abilities.
- Strong project management and organizational skills.
- Excellent verbal and written communication.
- Client-focused mindset with strong consulting capabilities.
- Ability to work both independently and as part of a team.
- Adaptability and willingness to learn new technologies.
- Strong time management and prioritization skills.

(ref:hirist.tech)
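CI/CD pipelines like those the role above centers on are, at their core, ordered stages with fail-fast semantics. The following stdlib-only Python sketch illustrates that control flow; the stage names and fail-fast policy are illustrative assumptions, not the configuration syntax of Jenkins, GitLab CI, or GitHub Actions.

```python
def run_pipeline(stages):
    """Run (name, step) stages in order; stop at the first failure (fail-fast)."""
    results = []
    for name, step in stages:
        ok = step()
        results.append((name, ok))
        if not ok:
            break  # later stages are skipped, as CI servers do on failure
    return results

# Toy stages: real ones would compile code, run tests, and deploy artifacts.
stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy", lambda: True),
]
print(run_pipeline(stages))  # [('build', True), ('test', True), ('deploy', True)]
```

The same shape appears in every CI system's stage/job graph; a failed "test" stage prevents "deploy" from ever running.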
Posted 2 days ago
14.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Overview
We are seeking a dynamic and experienced Program Manager to lead and oversee the Data Governance Program for a large banking organization. The Program Manager will be responsible for the successful execution of data governance initiatives, ensuring compliance with regulatory requirements, promoting data quality, and fostering a culture of data stewardship across the enterprise. This role requires a strategic thinker with exceptional leadership, communication, and organizational skills to align cross-functional teams and drive the adoption of governance frameworks.

Key Responsibilities

Program Leadership
- Develop and execute a comprehensive Data Governance strategy aligned with the organization's objectives and regulatory requirements.
- Act as a liaison between senior leadership, stakeholders, and cross-functional teams to ensure program alignment and success.
- Drive organizational change to establish a culture of data governance and stewardship.
- Maintain a strong focus on program risk identification and timely reporting, devising actions to address identified risks.
- Perform cost-benefit analysis and justification of investments.

Planning And Project Management
- Project planning, scheduling, and tracking.
- Work prioritization and resource planning.
- Risk identification and reporting.
- Team planning and management.
- Status reporting.

Governance Framework Implementation
- Establish and manage a robust Data Governance framework, including policies, standards, roles, and responsibilities.
- Implement data cataloging, metadata management, and data lineage tools to enhance data visibility and accessibility.
- Oversee the creation of workflows and processes to ensure adherence to governance policies.

Stakeholder Engagement
- Report to CXO-level executives with program status updates, risk management, and outcomes.
- Collaborate with business units, IT teams, and compliance officers to identify governance priorities and resolve data-related challenges.
- Facilitate Data Governance Council meetings and ensure effective decision-making.
- Serve as a point of contact for internal and external auditors regarding data governance-related queries.

Compliance And Risk Management
- Ensure adherence to industry regulations and banking-specific compliance requirements.
- Identify and mitigate risks related to data usage, sharing, and security.

Monitoring And Reporting
- Develop key performance indicators (KPIs) and metrics to measure the effectiveness of the Data Governance Program.
- Provide regular updates to CXO-level executive leadership on program status, risks, and outcomes.
- Prepare and present audit and compliance reports as required.

Team Leadership And Mentorship
- Lead cross-functional teams, including data stewards, analysts, and governance professionals.
- Provide training and mentoring to promote awareness and understanding of data governance practices.

Technical Expertise
- Understanding of data engineering principles and practices: a good understanding of data pipelines, data storage solutions, data quality concepts, and data security is crucial.
- Familiarity with data engineering tools and technologies: this may include knowledge of ETL/ELT tools, Informatica IDMC, MDM, data warehousing solutions, Collibra data quality, cloud platforms (AWS, Azure, GCP), and data governance frameworks.

Qualifications
- Bachelor's degree in Computer Science, Data Management, Business Administration, or a related field; MBA or equivalent experience preferred.
- 14+ years of experience in program management, with at least 6 years focused on data governance or data management with MDM in the banking or financial services sector.
- Strong knowledge of data governance frameworks, principles, and tools (e.g., Collibra, Informatica, Alation).
- Experience with regulatory compliance requirements for the banking industry, such as GDPR, CCPA, BCBS 239, and AML/KYC regulations.
- Proven track record of successfully managing large, complex programs with cross-functional teams.
- Excellent communication and stakeholder management skills, with the ability to influence and align diverse groups.
- Familiarity with data analytics, data quality management, and enterprise architecture concepts.
- Certification in program or project management (e.g., PMP, PRINCE2) or data governance (e.g., DGSP, CDMP) is a plus.

Key Competencies
- Strong strategic thinking and problem-solving skills.
- Ability to work under pressure and manage multiple priorities.
- Exceptional leadership and interpersonal skills.
- Proficiency in program management tools and methodologies.
- Strong analytical and decision-making capabilities.

(ref:hirist.tech)
Posted 2 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
Role: Senior Consulting Engineer (Microsoft)

100% English fluency is required; this is a must.

About The Role
As a Senior Consulting Engineer you will provide expert consulting services and guidance to our clients on various cloud technologies, including Azure Entra ID, Azure, AWS, GCP, MDM, Active Directory, networking, and other related cloud solutions. The Senior Cloud Consultant will play a crucial role in assisting clients with cloud strategy, implementation, optimization, ongoing support, and various migration projects, including cloud tenant migrations, cross-platform migrations, identity migrations, mail migrations, and M365 migrations. You will be the go-to person for Microsoft 365, Azure Cloud, and all associated products and services.

You should be a highly skilled and experienced Senior Consultant with a strong background in Microsoft 365 (M365) security and cloud solutions. The ideal candidate will have a deep understanding of securing privileged access, implementing Zero Trust administration, and deploying Microsoft Defender products, Microsoft Purview, and Microsoft Sentinel. This role also requires expertise in Azure Landing Zones, management group administration, and the overall Microsoft security ecosystem. You will work closely with clients, guiding them through the design and implementation of modern security solutions, ensuring compliance with best practices and regulatory requirements. This role is pivotal in delivering successful engagements that help secure enterprise environments, enhance threat detection, and streamline cloud operations.

Key Responsibilities
- Must be well-spoken and comfortable talking with clients.
- Provide expert consulting services and guidance to clients for both supporting and implementing cloud technologies such as Entra ID, Azure, AWS, GCP, MDM, Active Directory, networking solutions, and other cloud infrastructure solutions.
- Support and implement Zero Trust architectures across M365 and Azure environments.
- Secure privileged identities and administrative roles, leveraging tools such as Azure AD Privileged Identity Management (PIM) and Conditional Access policies.
- Establish robust security baselines for administrative roles, ensuring least-privilege access across all environments.
- Lead M365 security and compliance projects with a focus on privileged access management, Zero Trust security models, and advanced security features.
- Be experienced in editing and writing PowerShell scripts and working with CLIs.
- Lead the implementation and migration of client workloads to cloud environments, ensuring seamless transitions and minimizing downtime.
- Implement and configure Microsoft Defender products (Defender for Identity, Defender for Endpoint, Defender for Cloud Apps, etc.) across client environments.
- Conduct cloud readiness assessments, identify potential risks and challenges, and provide mitigation strategies.
- Plan and execute cloud tenant migrations, cross-platform migrations (e.g., AWS to Azure, on-premises to GCP), identity migrations, mail migrations, and M365 migrations.
- Develop and deliver comprehensive training and knowledge transfer sessions to client teams on cloud technologies and best practices.
- Deploy and operationalize Microsoft Purview for compliance, data governance, and information protection solutions.
- Support, deploy, and configure Microsoft Sentinel for advanced threat detection, monitoring, and response.
- Provide expert guidance on supporting and deploying Azure Landing Zones and managing Azure management groups for optimal cloud governance.
- Collaborate with clients to understand their business requirements, assess their current infrastructure, and develop cloud strategies aligned with their goals.
- Be capable of both hands-on-keyboard work and guiding customers through technical tasks during implementations or migrations of solutions.
- Experience in writing technical solution documents, including implementation guides, technical design documents, operations guides, standard operating procedures, knowledge articles, and status reports.
- Ability to conduct workshops and meetings to develop client solutions.
- Maintain required industry technology certifications and obtain new certifications as needed based on technology trends and organizational requirements.
- Stay up to date with the latest trends, technologies, and advancements in the cloud computing industry, and proactively share insights with clients.
- Mentor and guide junior consultants, fostering knowledge sharing and professional growth within the team.
- Educate customers of all sizes on the value proposition of cloud solutions.
- Collaborate with cross-functional teams, including developers, architects, and project managers, to ensure successful cloud projects.
- Using company-prescribed methodologies, demonstrate the production of several medium- to large-scale designs, including some not based on an existing pattern, or which have substantial customization and/or contain solution components from contributing specialists.
- Participate and team up with sales, Enterprise Architects, Delivery, and the partner ecosystem to provide complete solutions to customers.
- Demonstrated ability to adapt to new technologies and learn quickly.
- Ability to handle multiple priorities and initiatives simultaneously.

Experience Requirements
- 5+ years of experience designing, supporting, and implementing large-scale, complex system architectures.
- 5+ years of experience working with Microsoft 365, Azure, and related security solutions.
- 5+ years of experience in cloud migrations, implementations, and supporting cloud solutions.
- Expertise in securing Microsoft 365 environments with tools like Azure AD PIM, Conditional Access, the Microsoft Defender suite, and Azure Information Protection.
- Proficiency in supporting and deploying Zero Trust frameworks and identity management solutions.
- Strong hands-on experience with Microsoft Sentinel for security monitoring, alerting, and incident response.
- In-depth knowledge of Microsoft Purview for data governance and compliance.
- Understanding of Azure Landing Zones and management group administration.
- Familiarity with identity and access management (IAM) best practices, particularly in hybrid cloud environments.

Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Minimum of 5 years of hands-on experience in cloud consulting, with a strong focus on Entra ID, Azure, MDM, Active Directory, and networking technologies. AWS and GCP are a plus.
- Proven track record of successfully delivering complex cloud projects, migrations, and implementations for enterprise clients.
- In-depth knowledge of cloud architecture, security, scalability, and cost optimization best practices.
- Expertise in cloud automation, DevOps methodologies, and infrastructure-as-code (IaC) technologies is a plus.
- Strong understanding of enterprise IT infrastructure, networking, and security principles.
- Excellent problem-solving, analytical, and critical thinking skills.
- Strong communication and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
- Proven ability to work independently and lead cross-functional teams in a collaborative environment.
- Relevant cloud certifications (e.g., Microsoft Azure Solutions Architect Expert, AWS Certified Solutions Architect Professional, Google Cloud Professional Cloud Architect) are highly desirable.

Additional Qualifications
- Azure Automation.
- Microsoft Active Directory and Azure Active Directory / Entra ID.
- Azure Site Recovery.
- Group Policy Management.
- DevOps experience with well-architected, high-performing, and scalable microservice-based architectures that are resilient and recoverable is also highly desirable.
- Define and enforce the Well-Architected Framework for major cloud providers: operational excellence, security, reliability, performance efficiency, and cost optimization.

Certifications (Preferred)
- Microsoft Certified: Security, Compliance, and Identity Fundamentals.
- Microsoft Certified: Azure Security Engineer Associate.
- Microsoft Certified: Identity and Access Administrator Associate.
- Microsoft Certified: Security Operations Analyst Associate.
- Microsoft 365 Certified: Enterprise Administrator Expert.

(ref:hirist.tech)
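The Zero Trust and Conditional Access work described above boils down to deny-by-default policy evaluation: access is granted only when every required signal is present. Here is a minimal Python sketch of that idea; the signal names are illustrative assumptions and do not reflect Microsoft Entra's actual Conditional Access policy schema.

```python
# Deny-by-default evaluation in the spirit of Zero Trust: all required
# signals must hold, or access is denied. Signal names are hypothetical.
REQUIRED = {"mfa_passed", "device_compliant", "user_risk_low"}

def evaluate(signals):
    """Return 'grant' only if every required signal is present; else 'deny'."""
    return "grant" if REQUIRED <= set(signals) else "deny"

print(evaluate({"mfa_passed", "device_compliant", "user_risk_low"}))  # grant
print(evaluate({"mfa_passed", "device_compliant"}))                   # deny
```

Real Conditional Access policies combine many more conditions (location, app, sign-in risk), but the deny-unless-all-conditions-hold shape is the key design choice.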
Posted 2 days ago
8.0 years
0 Lacs
Greater Kolkata Area
On-site
Key Responsibilities
- Data Science Leadership: Utilize in-depth knowledge of data science and data analytics to architect and drive strategic initiatives across departments.
- Stakeholder Collaboration: Work closely with cross-functional teams to define and implement data strategies aligned with business objectives.
- Model Development: Design, build, and deploy predictive models and machine learning algorithms to solve complex business problems and uncover actionable insights.
- Integration and Implementation: Collaborate with IT and domain experts to ensure smooth integration of data science models into existing business workflows and systems.
- Innovation and Optimization: Continuously evaluate new data tools, methodologies, and technologies to enhance analytical capabilities and operational efficiency.
- Data Governance: Promote data quality, consistency, and security standards across the organization.

Required Qualifications
- Bachelor's or Master's degree in Economics, Statistics, Data Science, or a related field.
- A minimum of 8 years of relevant experience in data analysis, data science, or analytics roles.
- At least 3 years of direct experience as a Data Scientist, preferably in enterprise or analytics lab environments.
- Possession of at least one recognized data science certification, such as:
  - Certified Analytics Professional (CAP)
  - Google Professional Data Engineer
- Proficiency in data visualization and storytelling tools and libraries, such as Matplotlib, Seaborn, and Tableau.
- Strong foundation in statistical modeling and risk analytics, with proven experience building and validating such models.

Preferred Skills And Attributes
- Strong programming skills in Python, R, or similar languages.
- Experience with cloud-based analytics platforms (AWS, GCP, or Azure).
- Familiarity with data engineering concepts and tools (e.g., SQL, Spark, Hadoop).
- Excellent problem-solving, communication, and stakeholder engagement skills.
- Ability to manage multiple projects and mentor junior team members.

(ref:hirist.tech)
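The qualifications above stress building and validating statistical models. A core part of validation is measuring performance on held-out data rather than training data; the stdlib-only toy below fits a one-feature threshold classifier on a training set and scores it on a holdout set. The data points and the midpoint-threshold rule are invented purely for illustration.

```python
import statistics

# Toy labeled data: (feature, class). Invented for this sketch.
train = [(1.0, 0), (2.0, 0), (6.0, 1), (7.0, 1)]
holdout = [(1.5, 0), (6.5, 1), (2.5, 0)]

# "Fit": place the decision threshold at the midpoint between class means.
mean0 = statistics.mean(x for x, y in train if y == 0)
mean1 = statistics.mean(x for x, y in train if y == 1)
threshold = (mean0 + mean1) / 2

def predict(x):
    """Classify by which side of the fitted threshold the value falls on."""
    return 1 if x > threshold else 0

# Validate on held-out data the model never saw during fitting.
accuracy = sum(predict(x) == y for x, y in holdout) / len(holdout)
print(threshold, accuracy)  # 4.0 1.0
```

Real model validation adds cross-validation, calibration, and drift checks, but the train/holdout separation shown here is the foundational step.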
Posted 2 days ago
5.0 years
0 Lacs
Greater Kolkata Area
On-site
About Sleek
Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back office easy for micro SMEs. We give entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of entrepreneurs globally, we are innovating in a highly lucrative space.

We Operate 3 Business Segments
- Corporate Secretary: Automating the company incorporation, secretarial, filing, Nominee Director, mailroom, and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with 5% market share of all new business incorporations.
- Accounting & Bookkeeping: Redefining what it means to do accounting, bookkeeping, tax, and payroll thanks to our proprietary SleekBooks ledger, AI tools, and exceptional customer service.
- FinTech Payments: Overcoming a key challenge for entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia, and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes, and LinkedIn as one of the fastest-growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash-flow-positive, tech-enabled unicorns based out of Singapore.

The Role
We are looking for an experienced Senior Data Engineer to join our growing team. As a key member of our data team, you will design, build, and maintain scalable data pipelines and infrastructure to enable data-driven decision-making across the organization.
This role is ideal for a proactive, detail-oriented individual passionate about optimizing and leveraging data for impactful business outcomes.

You will:
- Work closely with cross-functional teams to translate our business vision into impactful data solutions.
- Drive the alignment of data architecture requirements with strategic goals, ensuring each solution not only meets analytical needs but also advances our core objectives.
- Be pivotal in bridging the gap between business insights and technical execution by tackling complex challenges in data integration, modeling, and security, and by setting the stage for exceptional data performance and insights.
- Shape the data roadmap, influence design decisions, and empower our team to deliver innovative, scalable, high-quality data solutions every day.

Objectives:
- Achieve and maintain a data accuracy rate of at least 99% for all business-critical dashboards by start of day (accounting for corrections and job failures), with a 24-business-hour error-detection SLA and a 5-day correction SLA.
- Ensure 95% of data on dashboards originates from technical data pipelines to mitigate data drift.
- Set up strategic dashboards based on business needs which are robust, scalable, and easy and quick to operate and maintain.
- Reduce costs of data warehousing and pipelines by 30%, then maintain costs as data needs grow.
- Achieve 50 eNPS on data services (e.g., dashboards) from key business stakeholders.

Responsibilities:
- Data Pipeline Development: Design, implement, and optimize robust, scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
- Data Modeling: Develop and maintain conceptual, logical, and physical data models to support analytics and reporting requirements.
- Infrastructure Management: Architect, deploy, and maintain cloud-based data platforms (e.g., AWS, GCP).
- Collaboration: Work closely with data analysts, business owners, and stakeholders to understand data requirements and deliver reliable solutions, including designing and implementing robust, efficient, and scalable data visualization on Tableau or Looker Studio.
- Data Governance: Ensure data quality, consistency, and security through robust validation and monitoring frameworks.
- Performance Optimization: Monitor, troubleshoot, and optimize the performance of data systems and pipelines.
- Innovation: Stay up to date with the latest industry trends and emerging technologies to continuously improve data engineering practices.

Qualifications:
- Experience: 5+ years in data engineering, software engineering, or a related field.
- Technical Proficiency: Proficiency in working with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra). Familiarity with big data frameworks like Hadoop, Hive, Spark, Airflow, BigQuery, etc. Strong expertise in programming languages such as Python, NodeJS, and SQL.
- Cloud Platforms: Advanced knowledge of cloud platforms (AWS or GCP) and their associated data services.
- Data Warehousing: Expertise in modern data warehouses like BigQuery, Snowflake, or Redshift.
- Tools & Frameworks: Expertise in version control systems (e.g., Git), CI/CD pipelines, and JIRA.
- Big Data Ecosystems / BI: BigQuery, Tableau, Looker Studio.
- Industry Domain Knowledge: Google Analytics (GA), HubSpot, accounting/compliance, etc.
- Soft Skills: Excellent problem-solving abilities, attention to detail, and strong communication skills.

Preferred Qualifications:
- Degree in Computer Science, Engineering, or a related field.
- Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).
- Familiarity with machine learning pipelines and tools.
- Knowledge of data security best practices and regulatory compliance.

The Interview Process
The successful candidate will participate in the interview stages below (note that the order might differ from what you read here). We anticipate the process to last no more than 3 weeks from start to finish. Whether the interviews are held over video call or in person will depend on your location and the role.
- Case study: a 60-minute chat with the Data Analyst, who will give you some real-life challenges that this role faces and ask for your approach to solving them.
- Career deep dive: a 60-minute chat with the Hiring Manager (COO). They'll discuss your last 1-2 roles to understand your experience in more detail.
- Behavioural fit assessment: a 60-minute chat with our Head of HR or Head of Hiring, who will dive into some of your recent work situations to understand how you think and work.
- Offer + reference interviews: we'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide, for background screening.

Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below:
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.

We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.

(ref:hirist.tech)
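This posting sets a concrete 99% dashboard-accuracy target backed by validation and monitoring frameworks. As a loose, stdlib-only illustration of the kind of check such a framework might run, the sketch below compares dashboard figures against source-of-truth figures within a tolerance; the metric names, values, and 0.5% tolerance are all invented for the example.

```python
# Hypothetical accuracy check against a 99% SLA: a metric "matches" when
# the dashboard value is within a relative tolerance of the source of truth.
SLA = 0.99
TOLERANCE = 0.005  # 0.5% relative difference still counts as a match

source_of_truth = {"revenue": 1000.0, "signups": 250.0, "churn": 12.0}
dashboard = {"revenue": 1001.0, "signups": 250.0, "churn": 15.0}

def accuracy(truth, shown):
    """Fraction of metrics whose dashboard value matches the source within tolerance."""
    matches = sum(
        abs(shown[k] - v) <= TOLERANCE * abs(v) for k, v in truth.items()
    )
    return matches / len(truth)

rate = accuracy(source_of_truth, dashboard)
# Here revenue and signups match but churn does not, so the SLA is breached
# and a real framework would raise an alert for investigation.
print(rate, rate >= SLA)
```

In production this comparison would run as a scheduled job against warehouse queries, with the breach feeding the 24-business-hour detection SLA mentioned above.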
Posted 2 days ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Summary
We are seeking a skilled and proactive Senior DevOps Engineer with 6+ years of experience to join our dynamic engineering team. In this role, you will drive automation, streamline CI/CD pipelines, manage cloud infrastructure, and ensure system reliability, scalability, and security. You'll collaborate closely with development, QA, and IT teams to bridge gaps between operations and development, delivering faster, more reliable deployments.

Key Responsibilities
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or GitHub Actions.
- Manage and automate cloud infrastructure (AWS, Azure, or GCP) using infrastructure as code (Terraform, CloudFormation, Pulumi, etc.).
- Ensure high availability and reliability of production environments by implementing monitoring, alerting, and logging solutions (Prometheus, Grafana, ELK, Datadog, etc.).
- Automate configuration management and system provisioning using Ansible, Chef, Puppet, or SaltStack.
- Implement containerization using Docker and orchestration using Kubernetes or ECS.
- Drive adoption of DevOps best practices across the engineering organization.
- Optimize performance and cost of cloud resources.
- Conduct security reviews and ensure compliance with industry standards (e.g., ISO, SOC 2, HIPAA).
- Manage version control workflows using Git and branching strategies.
- Troubleshoot production issues and perform root cause analysis in collaboration with development and support teams.
- Mentor junior team members and support knowledge sharing across teams.

Qualifications
- Bachelor's degree (or higher) with a major in Information Technology, Computer Science, Engineering, or a related field preferred.
- Overall 6+ years of IT experience, including 2+ years of experience in Shell/Perl scripting.
- Experience in DevOps and CI/CD implementation.
- Experience with containerization platforms (one of): Red Hat OpenShift, Kubernetes, AWS EKS.
- Experience with deployment automation and orchestration platforms (Jenkins).
- Experience with collaborative version control systems (Git/GitHub/GitLab/Bitbucket).
- Experience with configuration management tools like Chef, Ansible, Terraform, CloudFormation, Puppet.
- Experience with cloud computing and container solutions such as Amazon EC2 and Docker.
- Experience with scripting languages (Groovy, Python, Jenkins DSL) to develop automation tools.
- Experience with virtualization technologies such as VMware.
- Experience managing code repositories and applying DevOps best practices.
- Knowledge of basic Java constructs in both web and thick-client applications.
- Familiarity with enterprise Oracle DB deployments.
(ref:hirist.tech)
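The monitoring-and-alerting requirement above ultimately comes down to evaluating rules against metric streams. Below is a minimal, illustrative sketch of a threshold alert rule of the kind Prometheus or Datadog would express declaratively; the function name, sample data, and thresholds are invented for illustration, not taken from any specific tool:

```python
# Toy threshold-based alerting rule: fire only when several consecutive
# samples breach the threshold, so a single spike does not page anyone.

def evaluate_alert(samples, threshold, min_consecutive):
    """Return True if the last `min_consecutive` samples all exceed `threshold`."""
    if len(samples) < min_consecutive:
        return False
    return all(s > threshold for s in samples[-min_consecutive:])

cpu_percent = [42.0, 55.0, 91.0, 93.5, 97.2]
print(evaluate_alert(cpu_percent, threshold=90.0, min_consecutive=3))  # True
print(evaluate_alert(cpu_percent, threshold=90.0, min_consecutive=4))  # False
```

Requiring consecutive breaches is the same anti-flapping idea as Prometheus's `for:` clause on alerting rules.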
Posted 2 days ago
5.0 years
0 Lacs
Greater Kolkata Area
Remote
About Sleek
Through proprietary software and AI, along with a focus on customer delight, Sleek makes the back-office easy for micro SMEs. We give entrepreneurs time back to focus on what they love doing: growing their business and being with customers. With a surging number of entrepreneurs globally, we are innovating in a highly lucrative space.

We Operate 3 Business Segments
- Corporate Secretary : Automating the company incorporation, secretarial, filing, Nominee Director, mailroom and immigration processes via custom online robots and SleekSign. We are the market leaders in Singapore with a 5% share of all new business incorporations.
- Accounting & Bookkeeping : Redefining what it means to do Accounting, Bookkeeping, Tax and Payroll thanks to our proprietary SleekBooks ledger, AI tools and exceptional customer service.
- FinTech payments : Overcoming a key challenge for entrepreneurs by offering digital banking services to new businesses.

Sleek launched in 2017 and now has around 15,000 customers across our offices in Singapore, Hong Kong, Australia and the UK. We have around 450 staff with an intact startup mindset. We have achieved >70% compound annual growth in revenue over the last 5 years and as a result have been recognised by The Financial Times, The Straits Times, Forbes and LinkedIn as one of the fastest growing companies in Asia. Backed by world-class investors, we are on track to be one of the few cash flow positive, tech-enabled unicorns based out of Singapore.

Sleek is also a proudly certified B Corp. Since we started our journey in 2017, we've been committed to building Sleek as a force for good. In just over 5 years, we've joined a community of industry leaders like Patagonia, Ben & Jerry's, and P&G who are building an inclusive, equitable, and regenerative economy. We have planted over 29,271 trees to reforest our ecosystem and saved 7 tons of paper from landfills by processing over 1.4M pages through SleekSign.
We aim to be Carbon Neutral.

The Role : Mission
As the Fullstack AI Engineer, you will be at the forefront of transforming our technology landscape through comprehensive, end-to-end AI solutions. Your mission is to architect, develop, and deploy scalable systems that leverage advanced Artificial Intelligence and automation capabilities to drive operational efficiency, product innovation, and user satisfaction. You will work closely with cross-functional teams, ranging from data scientists and product managers to UX/UI designers and DevOps professionals, to ensure seamless integration of AI-powered features where they deliver the greatest value, while maintaining a robust software architecture and intuitive user experience.

Role
Your role will be instrumental in ensuring our next-generation platform not only meets client and internal needs but also sets new standards for innovation, reliability, and excellence in AI-driven software.

Responsibilities :
- Deliver High-Quality Code : Consistently submit well-documented, maintainable, and robust code through Pull Requests (PRs) that include comprehensive tests (unit, integration) and adhere to team coding standards and best practices. Aim for high test coverage (e.g., >80%) on new code.
- Apply Strong Fundamentals : Design and implement features demonstrating strong understanding and application of software architecture principles (e.g., SOLID, clean architecture), efficient database design (schema, indexing, query optimization), and API best practices (RESTful design, security).
- Leverage AI Tools Effectively : Integrate AI coding assistants and tools into your daily workflow to demonstrably improve development speed (e.g., reducing boilerplate, accelerating test writing, assisting debugging) without compromising code quality or maintainability, or introducing unnecessary complexity. Be prepared to share effective prompts and techniques.
- Contribute to System Stability : Keep the production rollback rate related to your contributions under 2% by enforcing comprehensive testing, leveraging CI/CD pipelines effectively, and adhering to established DevOps best practices.
- Engage in Collaborative Development : Actively participate in code reviews, providing timely, constructive feedback based on engineering principles and best practices. Respond proactively to feedback on your own PRs, contributing positively to team velocity and knowledge sharing.

Requirements :
- Professional Background : You've excelled as a Fullstack Engineer, AI Specialist, or a similar role for 5+ years, developing and deploying scalable applications that harness the power of AI.
- Front-End & Back-End : Fluent in VueJS/React and NodeJS (TypeScript), with hands-on experience in microservices that are modular, reusable, and resilient.
- Databases & Architecture : Comfortable with MongoDB/Supabase, ESR/RPC, and adept at designing solutions aligned with Clean/Hexagonal architecture, SOLID principles, ACID, and idempotency.
- AI Frameworks & Tools : Skilled in advanced AI ecosystems (LangChain, LlamaIndex, etc.), with experience in prompt engineering, RAG, and an interest in specialized hardware like Groq.
- Quality Advocate : Passionate about maintaining high testing standards, security patches, and thorough documentation for AI pipelines, ensuring consistent and reliable performance.
- Interpersonal Skills : Able to communicate fluidly with Agile teams, bridging gaps between product, UX, data science, and engineering to deliver successful AI initiatives.
- Cloud & DevOps Expertise : Knowledge of AWS/GCP, when to leverage serverless/PaaS, and experience deploying microservices to Kubernetes, with a keen eye for best practices in MLOps and AI model lifecycle management.
- Collaboration & Independence : Equally comfortable working solo or in a team environment, driving projects from ideation to production while inspiring those around you.

Our Culture :
- Humility and kindness : Humility is a core attribute we hire for, which means we have a culture of not taking ourselves too seriously and being able to laugh. Kindness is also incredibly important. We are committed to creating and nurturing a diverse and inclusive environment.
- Flexibility : You'll be able to work from home. If you need to start early or start late to cater to your family or other needs, we don't mind, so long as you get your work done and proactively communicate.
- Financial benefits : We pay competitive market salaries and provide staff with generous paid time off and holiday schedules. Additionally, you'll be able to access our flexi benefits scheme for home office equipment or health and fitness expenditure. Certain staff at Sleek are also eligible for our employee share ownership plan and can share in the upside of our stellar growth trajectory as we work toward listing on a prominent stock exchange in the Asia Pacific region.
- Personal growth : You'll get a lot of responsibility and autonomy at Sleek. We move at a fast pace, so you'll be making decisions, making mistakes and learning. There's also a range of internal and external facing training programmes we run. We're also at the forefront of utilising AI in our space and are developing a regional centre of AI excellence. It is our intention that if you leave Sleek, you leave as a more well-rounded person.

Interview Process : The successful candidate will participate in the below interview stages.
- Introductory Call : A quick chat with a member of the HR Team.
- Take Home Assessment : A small take-home assessment to demonstrate your knowledge and skills in both frontend and backend development. Estimated time : 2 hours. Note : The assessment is unrelated to Sleek's business, and will not be used or distributed beyond the interview process.
- Case Study Interview : A 60-minute chat with a member of the team to validate the technical aptitude of the candidate.
- Career Deep Dive / Behavioral Fit Interview : A 60-minute chat with the CTO of the company.
- Offer + reference calls : We'll make a non-binding offer verbally or over email, followed by a couple of short phone or video calls with references that you provide to us.

Requirement for background screening : Please be aware that Sleek is a regulated entity and as such is required to perform different levels of background checks on staff depending on their role. This may include using external vendors to verify the below :
- Your education.
- Any criminal history.
- Any political exposure.
- Any bankruptcy or adverse credit history.
We will ask for your consent before conducting these checks. Depending on your role at Sleek, an adverse result on one of these checks may prohibit you from passing probation.
(ref:hirist.tech)
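The RAG experience this listing asks for centers on a retrieval step that selects context before prompting a model. The sketch below substitutes plain keyword overlap for the vector-similarity search that frameworks like LangChain or LlamaIndex actually use; all documents and scoring choices are illustrative:

```python
# Minimal retrieval step of a RAG pipeline, using keyword overlap
# instead of embeddings. Real systems swap score() for vector search.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q)  # fraction of query terms present in doc

def retrieve(query, docs, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "Sleek automates company incorporation and filings",
    "Kubernetes schedules containers across a cluster",
    "RAG augments an LLM prompt with retrieved context",
]
best = retrieve("what is RAG context retrieval", docs)[0]
prompt = f"Answer using this context:\n{best}\n\nQuestion: what is RAG?"
print(best)
```

The retrieved passage is then spliced into the prompt, which is the "augmented" part of retrieval-augmented generation.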
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Job
We're Hiring : DevOps Engineer (2-5 Years Exp.) | Noida
Location : Sector 158, Noida | On-site | Full-time
Industry : Internet News | Media | Digital

Are you a DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2 to 5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.
- Immediate joiners preferred.
(ref:hirist.tech)
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We're Hiring | Junior DevOps Engineer (2-5 Years Exp)
Location : Noida | Immediate Joiners Preferred

We are looking for a Junior DevOps Engineer with 2-5 years of experience to join our team and support our deployment processes and infrastructure. This role is ideal for someone with a solid foundation in cloud platforms, automation tools, and CI/CD pipelines, who is eager to grow their skills in a dynamic environment. You will work closely with senior engineers to ensure the scalability, security, and performance of our systems.

Key Responsibilities
- Assist in the maintenance and improvement of CI/CD pipelines.
- Support the deployment and management of applications in cloud environments (preferably GCP).
- Collaborate with development, QA, and operations teams to facilitate smooth release cycles.
- Monitor system performance and assist in troubleshooting efforts.
- Help build and maintain containerized applications using Docker and orchestration with Kubernetes.
- Contribute to infrastructure-as-code initiatives using Terraform, Ansible, or similar tools.
- Assist in the deployment and scaling of applications.
- Support the automation of infrastructure, monitoring, and alerting systems.
- Participate in root cause analysis and resolution of production issues, under supervision.

Required Skills & Qualifications
- 2 to 5 years of experience in DevOps or similar engineering roles.
- Solid understanding of application deployment and management in cloud environments.
- Familiarity with GCP, AWS, or Azure.
- Proficiency in Git and basic understanding of CI/CD tools (Jenkins, GitLab CI/CD).
- Basic knowledge of containerization (Docker) and orchestration (Kubernetes).
- Good scripting skills in Shell, Python, or similar.
- Familiarity with databases (SQL and NoSQL) is a plus.
- Understanding of secure development practices and information security standards.
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.
- Bachelor's degree in Computer Science or a related field.
Preferred : Immediate joiners and local candidates from Noida.
Industry : Internet News | Broadcast Media Production and Distribution
(ref:hirist.tech)
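For a junior engineer, the core mental model behind the CI/CD responsibilities above is simple: a pipeline is an ordered list of stages that halts on the first failure. This toy sketch makes that explicit; Jenkins or GitLab CI express the same idea declaratively, and the stage names here are invented:

```python
# Toy CI pipeline runner: execute stages in order, stop at first failure.

def run_pipeline(stages):
    """stages: list of (name, callable) pairs; each callable returns True on success."""
    for name, step in stages:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

result = run_pipeline([
    ("lint",   lambda: True),
    ("test",   lambda: True),
    ("build",  lambda: False),  # simulate a failing build
    ("deploy", lambda: True),   # never reached
])
print(result)  # FAILED at build
```

Fail-fast ordering is why cheap checks (lint, unit tests) run before expensive ones (build, deploy) in real pipelines.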
Posted 2 days ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
DevOps Engineer (2-5 Years Exp.) | Noida
Location : Sector 158, Noida | On-site

Description :
Are you a DevOps Engineer with 2-5 years of experience, ready to take ownership of large-scale infrastructure and cloud deployments? We're looking for a hands-on DevOps expert with strong experience in Google Cloud Platform (GCP) to lead our CI/CD pipelines, automate deployments, and manage a microservices-based infrastructure at scale.

What You'll Do
- Own and manage CI/CD pipelines and infrastructure end-to-end.
- Architect and deploy scalable solutions on GCP (preferred).
- Streamline release cycles in coordination with QA, product, and engineering teams.
- Build containerized apps using Docker and manage them via Kubernetes.
- Use Terraform, Ansible, or equivalent tools for Infrastructure-as-Code (IaC).
- Monitor system performance and lead troubleshooting during production issues.
- Drive automation across infrastructure, monitoring, and alerts.
- Ensure microservices run securely and reliably.

What We're Looking For
- 2 to 5 years of experience in DevOps or similar roles.
- Strong GCP (Google Cloud Platform) experience is mandatory.
- Hands-on with Docker, Kubernetes, Jenkins/GitLab CI/CD, Git/GitHub.
- Solid scripting knowledge (Shell, Python, etc.).
- Familiarity with Node.js/React deployments.
- Experience with SQL/NoSQL DBs and tools like Elasticsearch, Spark, or Presto.
- Good understanding of secure development and InfoSec standards.
- Immediate joiners preferred.
(ref:hirist.tech)
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About The Role
We are looking for a Data Analyst with a strong background in data visualization and business analytics. This role requires expertise in analyzing large datasets, creating insightful reports, and presenting data-driven findings using Adobe Analytics, Tableau, Power Query, and SQL Workbench. The ideal candidate will help drive business decisions by transforming raw data into meaningful insights through advanced reporting and visualization.

Responsibilities
- Analyze large datasets to extract meaningful insights and identify trends to support business decision-making.
- Develop interactive dashboards and reports using Tableau and Power Query for data visualization.
- Utilize Adobe Analytics to track and analyze digital performance, user behavior, and key business metrics.
- Write and optimize SQL queries in SQL Workbench to retrieve, manipulate, and process large volumes of data.
- Translate complex data findings into clear and actionable business insights for various stakeholders.
- Create compelling data visualizations and storytelling techniques to communicate findings effectively.
- Work with cross-functional teams to define business analytics needs, ensuring alignment with company objectives.
- Ensure data accuracy, integrity, and automation in reporting processes.
- Identify key KPIs and metrics to measure business performance and suggest data-driven strategies.

Skills & Qualifications
- 3+ years of experience in data analysis, business analytics, and data visualization.
- Expertise in Adobe Analytics for web traffic analysis and performance tracking.
- Strong hands-on experience with Tableau for data visualization and interactive reporting.
- Proficiency in Power Query for data transformation and automation.
- Advanced SQL skills using SQL Workbench to query and manipulate large datasets.
- Strong understanding of business analytics and KPI measurement for data-driven decision-making.
- Excellent data storytelling skills, with the ability to present insights to both technical and non-technical stakeholders.
- Strong problem-solving and critical-thinking skills in interpreting data trends.
- Experience working with cross-functional teams, including marketing, product, and finance, to support data-driven decisions.

Preferred Qualifications
- Experience with Python or R for advanced data analysis.
- Familiarity with Google Analytics or other digital analytics tools.
- Exposure to cloud platforms (AWS, GCP, or Azure) for data processing.
(ref:hirist.tech)
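The SQL side of this role is typified by KPI aggregations: group, count, average, rank. Here is a small, self-contained example using an in-memory SQLite database as a stand-in for SQL Workbench; the table and figures are invented for illustration:

```python
# A KPI aggregation of the kind described above: average time-on-page
# per page, with the busiest page first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (page TEXT, duration REAL)")
conn.executemany("INSERT INTO visits VALUES (?, ?)",
                 [("home", 12.0), ("home", 8.0), ("pricing", 30.0)])

rows = conn.execute("""
    SELECT page, COUNT(*) AS views, AVG(duration) AS avg_seconds
    FROM visits
    GROUP BY page
    ORDER BY views DESC
""").fetchall()
print(rows)  # [('home', 2, 10.0), ('pricing', 1, 30.0)]
```

The same GROUP BY / aggregate pattern scales from this toy table to the large datasets the listing mentions; indexes on the grouped column keep it fast.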
Posted 2 days ago
6.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Tech Lead PHP (Laravel / Phalcon + Full Stack)
Location : Ahmedabad.
Experience : 6-8 Years.

Job Description
We're looking for a Tech Lead who will actively do R&D, code hands-on, mentor developers, and drive the technical direction of our product platform. If you're passionate about clean architecture, high performance, and building scalable PHP applications, let's connect!

What You'll Be Doing
- Writing clean, efficient, and well-architected PHP code (Laravel / Phalcon).
- Taking ownership of the technical roadmap and overall system architecture.
- Reviewing code, solving complex dev challenges, and mentoring team members.
- Collaborating with Product, QA, and DevOps teams for smooth, high-quality releases.
- Leading by example on best practices, performance tuning & architectural decisions.

Tech Skills You Should Have
- Strong hands-on with PHP 8.x, Laravel 10+, and Phalcon.
- Proficient in MySQL / MariaDB, DB schema design, and query optimisation.
- Solid knowledge of REST APIs, WebSocket, JSON, and XML.
- Frontend experience with HTML5, CSS3, and modern JS frameworks (React.js, Vue.js, or Angular).
- Working knowledge of Git, Docker, Linux, CI/CD, and API tools like Swagger / OpenAPI.
- Familiar with Redis, RabbitMQ, or similar for asynchronous workflows.

Bonus Skills
- Experience with microservices, cloud platforms (AWS / GCP), or multi-tenant SaaS platforms.
(ref:hirist.tech)
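The Redis/RabbitMQ requirement refers to the asynchronous-workflow pattern: web requests enqueue slow jobs, and a separate worker drains the queue. The language-agnostic sketch below (shown in Python for brevity; the job names are invented) uses an in-process deque as a stand-in for a Redis list or AMQP queue:

```python
# Producer/worker sketch: requests enqueue jobs; a worker processes
# them later, so the request path never blocks on slow work.
from collections import deque

queue = deque()  # stand-in for a Redis list or RabbitMQ queue

def enqueue(job):
    queue.append(job)

def worker_drain():
    done = []
    while queue:
        done.append(f"processed:{queue.popleft()}")
    return done

enqueue("send_invoice_email")
enqueue("generate_pdf")
print(worker_drain())
```

In a real Laravel deployment this is what queued jobs plus a queue worker process provide; the broker adds persistence and delivery guarantees the in-memory deque lacks.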
Posted 2 days ago
6.0 years
0 Lacs
Gandhinagar, Gujarat, India
On-site
Job Summary
We are looking for a highly skilled and visionary DevOps & AIOps Expert to lead the automation, monitoring, and intelligent operations of our IT infrastructure and software delivery pipelines. The ideal candidate will be responsible for designing and implementing DevOps practices alongside AIOps capabilities to improve reliability, reduce downtime, and enable proactive issue resolution using artificial intelligence and machine learning. This role requires a deep understanding of CI/CD, infrastructure automation, observability tools, and AI-driven operations.

DevOps Responsibilities :
- Design, implement, and manage CI/CD pipelines using tools like Jenkins, GitLab CI, CircleCI, or Azure DevOps.
- Automate infrastructure provisioning and configuration using Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Manage containerization and orchestration using Docker and Kubernetes.
- Maintain secure and scalable cloud environments (AWS, Azure, GCP) including compute, storage, and networking.
- Optimize build and deployment processes for performance, repeatability, and reliability.
- Implement and enforce DevOps best practices for code quality, release management, and version control.

AIOps Responsibilities :
- Deploy and configure AIOps platforms (e.g., Moogsoft, BigPanda, Splunk, Dynatrace, Datadog, or IBM Watson AIOps).
- Implement machine learning-based anomaly detection and root cause analysis systems.
- Aggregate and analyze logs, metrics, and traces to derive operational insights and automate alert management.
- Integrate AI-driven automation for incident detection, diagnosis, and remediation.
- Collaborate with infrastructure, application, and security teams to build self-healing systems.
- Continuously evaluate emerging AIOps technologies to enhance operational efficiency.

Skills Required :
- CI/CD : Jenkins, GitLab CI, Azure DevOps.
- Infrastructure as Code : Terraform, Ansible, Chef, Puppet.
- Containers & Orchestration : Docker, Kubernetes.
- Scripting : Python, Bash.
- Version Control : Git, GitHub.
- Monitoring & Observability : Prometheus, Grafana, ELK/EFK.
- Cloud Platforms : AWS (EC2, S3, Lambda), Azure (VMs, App Services), Google Cloud Platform (GKE).
- AIOps Tools : Moogsoft, BigPanda, Splunk ITSI, Dynatrace, AppDynamics, LogicMonitor, Datadog.
- Log & Metric Analysis : ELK stack, Fluentd, Logstash.
- ML Frameworks : Python (Scikit-learn, TensorFlow), Jupyter Notebooks.

Qualifications :
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 6+ years of experience in DevOps; 2+ years working with AIOps platforms.
- Strong knowledge of DevOps principles, site reliability engineering (SRE), and IT operations.
- Experience with monitoring, observability, and AI-driven automation.
- Strong analytical and problem-solving skills with a proactive mindset.
- Excellent communication and collaboration skills.

Preferred Certifications :
- AWS/Azure/GCP Certified DevOps Engineer
- Certified Kubernetes Administrator (CKA)
- Terraform Associate Certification
- ITIL / AIOps Foundation (e.g., from Moogsoft, Splunk)
(ref:hirist.tech)
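The anomaly-detection responsibility above rests on a simple statistical core: flag values that deviate strongly from a baseline. The sketch below uses a z-score over a toy latency series; platforms like Datadog or Dynatrace use far richer models (seasonality, forecasting), and the data and threshold here are invented:

```python
# Z-score anomaly detection: flag samples more than `threshold`
# standard deviations away from the series mean.
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.0):
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

latency_ms = [101, 99, 102, 98, 100, 103, 97, 450]  # one obvious spike
print(zscore_anomalies(latency_ms))  # [450]
```

Note that a large outlier inflates the mean and standard deviation themselves, which is why production systems typically score against a trailing baseline window that excludes the point being tested.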
Posted 2 days ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title : ERPNext Engineer & Specialist
Location : Ahmedabad, Gujarat

About Us
Momentum91 is a technology-driven company specializing in IT solutions, ERP implementations, and enterprise software development. We focus on helping businesses streamline operations, optimize workflows, and achieve digital transformation through robust and scalable ERP solutions.

Position Overview
We are looking for an experienced ERPNext Engineer to lead the implementation, customization, and integration of ERPNext solutions. The ideal candidate should have strong expertise in ERP module customization, business process automation, and API integrations while ensuring system scalability and performance.

Key Responsibilities
ERPNext Implementation & Customization :
- Design, configure, and deploy ERPNext solutions based on business requirements.
- Customize and develop new modules, workflows, and reports within ERPNext.
- Optimize system performance and ensure data integrity.
Integration & Development :
- Develop custom scripts using Python and the Frappe Framework.
- Integrate ERPNext with third-party applications via REST API.
- Automate workflows, notifications, and business processes.
Technical Support & Maintenance :
- Troubleshoot ERP-related issues and provide ongoing support.
- Upgrade and maintain ERPNext versions with minimal downtime.
- Ensure security, scalability, and compliance in ERP systems.
Collaboration & Documentation :
- Work closely with stakeholders to understand business needs and translate them into ERP solutions.
- Document ERP configurations, custom scripts, and best practices.

Requirements & Skills :
- 2+ years of experience working with ERPNext and the Frappe Framework.
- Strong proficiency in Python, JavaScript, and SQL.
- Hands-on experience with ERPNext customization, report development, and API integrations.
- Knowledge of Linux, Docker, and cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience in business process automation and workflow optimization.
- Familiarity with version control (Git) and Agile development.

Benefits :
- Competitive salary.
- Opportunity to lead a transformative ERP project for a mid-market client.
- Professional development opportunities.
- Fun and inclusive company culture.
- Five-day workweek.
(ref:hirist.tech)
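The REST-integration work this role describes usually starts with an authenticated request against an ERPNext resource endpoint. ERPNext conventionally accepts a `token <api_key>:<api_secret>` Authorization header; the host, doctype, and credentials below are made up, and the request is only constructed, not sent:

```python
# Sketch of building an authenticated ERPNext-style REST request.
# Nothing is sent over the network; pass `req` to urlopen() to execute.
import urllib.parse
import urllib.request

def build_request(base_url, doctype, api_key, api_secret):
    url = f"{base_url}/api/resource/{urllib.parse.quote(doctype)}"
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"token {api_key}:{api_secret}")
    req.add_header("Accept", "application/json")
    return req

req = build_request("https://erp.example.com", "Sales Invoice", "KEY", "SECRET")
print(req.full_url)                      # https://erp.example.com/api/resource/Sales%20Invoice
print(req.get_header("Authorization"))   # token KEY:SECRET
```

Quoting the doctype matters because ERPNext doctypes ("Sales Invoice", "Purchase Order") routinely contain spaces.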
Posted 2 days ago
0 years
0 Lacs
Sahibzada Ajit Singh Nagar, Punjab, India
On-site
Role : Senior SQL Cloud Database Administrator

Responsibilities :
- Manage, optimize, and secure our cloud-based SQL databases, ensuring high availability and performance.
- Design and implement scalable and secure SQL database structures in AWS and GCP environments.
- Plan and execute data migration from on-premises or legacy systems to AWS and GCP cloud platforms.
- Monitor database performance, identify bottlenecks, and fine-tune queries and indexes for optimal efficiency.
- Implement and manage database security protocols, including encryption, access controls, and compliance with regulations.
- Develop and maintain robust backup and recovery strategies to ensure data integrity and availability.
- Perform regular maintenance tasks such as patching, updates, and troubleshooting database issues.
- Work closely with developers, DevOps, and data engineers to support application development and deployment.
- Ensure data quality, consistency, and governance across distributed systems.
- Keep up with emerging technologies, cloud services, and best practices in database management.

Required Skills :
- Proven experience as a SQL Database Administrator with expertise in AWS and GCP cloud platforms.
- Strong knowledge of SQL database design, implementation, and optimization.
- Experience with data migration to cloud environments.
- Proficiency in performance monitoring and query optimization.
- Knowledge of database security protocols and compliance regulations.
- Familiarity with backup and disaster recovery strategies.
- Excellent troubleshooting and problem-solving skills.
- Strong collaboration and communication skills.
- Knowledge of DevOps integration.
(ref:hirist.tech)
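A backup strategy is only as good as its restore drills. The sketch below demonstrates the principle with SQLite's online backup API purely for illustration; cloud DBAs would use managed snapshots or pg_dump-style tooling, but the verify-the-restore step is the same:

```python
# Backup-and-restore drill: take a consistent copy, simulate data
# loss, then prove the backup still answers queries correctly.
import sqlite3

live = sqlite3.connect(":memory:")
live.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
live.execute("INSERT INTO accounts VALUES (1, 250.0)")
live.commit()

backup = sqlite3.connect(":memory:")
live.backup(backup)                    # consistent online copy

live.execute("DELETE FROM accounts")   # simulate data loss on the primary
restored = backup.execute(
    "SELECT balance FROM accounts WHERE id = 1").fetchone()
print(restored)  # (250.0,)
```

The key property shown is isolation: changes on the primary after the backup point do not affect the copy, which is exactly what point-in-time recovery relies on.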
Posted 2 days ago
5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Title : Full Stack Engineer
Location : Ahmedabad

About Us :
At Momentum91, we specialize in providing scalable, high-performance IT and software development services. Our expertise covers various industries, delivering tailored solutions for mid-market and enterprise clients. With a proven track record in ideating, building, and scaling SaaS products and IT systems, including advanced ERP solutions, we help businesses achieve operational excellence and accelerated growth.

Position Overview
We are seeking an experienced and passionate Full Stack Engineer with 5+ years of hands-on experience in building scalable web applications using React.js on the frontend and Node.js on the backend. The ideal candidate will be responsible for end-to-end development, contributing to both architectural decisions and code implementation in a fast-paced, agile environment.

Key Responsibilities :
- Design, develop, and maintain scalable web applications using React.js and Node.js.
- Collaborate with UI/UX designers, product managers, and other developers to deliver high-quality features.
- Implement RESTful APIs and integrate third-party services.
- Write clean, maintainable, and testable code for both front-end and back-end components.
- Conduct code reviews, optimize application performance, and ensure code quality standards.
- Participate in the full software development lifecycle, including planning, development, testing, deployment, and maintenance.
- Troubleshoot and debug complex technical issues across the stack.
- Keep up-to-date with emerging technologies.

Requirements :
- Strong proficiency in JavaScript (ES6+) and TypeScript.
- In-depth knowledge of React.js (hooks, context API, component lifecycle, etc.).
- Experience with Node.js and Express.js for server-side development.
- Proficient with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful API design and implementation.
- Familiarity with version control tools like Git.
- Understanding of CI/CD pipelines and cloud services (AWS, Azure, or GCP).
- Knowledge of unit, integration, and end-to-end testing (Jest, Mocha, Cypress, etc.).
- Familiarity with containerization tools.

Qualifications :
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of professional experience as a Full Stack Developer.
- Proven track record of delivering web applications using React.js and Node.js.
- Strong problem-solving skills and ability to work independently or in a team.
- Experience with state management tools like Redux, MobX, or Context API.

Benefits
- Competitive salary.
- Fun, inclusive, and vibrant company culture.
- Five-day workweek.
- Professional development opportunities.
- Opportunity to be part of a rapidly growing company.
(ref:hirist.tech)
Posted 2 days ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description
We're looking for a Software Engineer to join our team. In this role, you'll build integrations between different cybersecurity platforms and third-party systems. You'll use both specialized low-code/no-code tools for quick development and Python scripting for more complex needs. Your work will directly support our security analysts by ensuring they have the correct data and automation to detect and respond to threats.

Key Responsibilities :
- Research and Evaluate APIs : Research and evaluate APIs from third-party platforms (e.g., SIEMs, threat intelligence providers, logging tools, etc.) to identify the most relevant integrations for our Security Operations Center (SOC) Analysts.
- Design and Develop Integrations : Design, develop, and deploy secure and scalable API integrations that bring real-time data and insights into the Metron Security ecosystem.
- API Interaction and Data Management : Make robust API calls to third-party platforms to extract existing data and generate new data or actions on those platforms.
- API Protocol Expertise : Work with RESTful APIs and the OpenAPI/Swagger specification to define and integrate APIs efficiently.
- Custom Scripting : Utilize Python scripting for complex data transformations, custom business logic, and advanced automation.
- Collaboration : Collaborate closely with Security Analysts and Backend Engineers to thoroughly understand SOC workflows and deliver features that directly enhance threat visibility and response capabilities.
- Command-Line Proficiency : Leverage the command-line interface (CLI) for API testing, automation, deployment, and debugging tasks (e.g., using curl, httpie, jq, and Bash scripting).
- Code Quality : Write clean, maintainable, and well-documented code, adhering to best security and software development practices.

Requirements
- API Expertise : Strong analytical and problem-solving skills with a proven ability to evaluate third-party APIs, understand their utility, and map them to security operations needs.
- API Protocols : Hands-on experience with API protocols and specifications, including REST and OpenAPI/Swagger.
- Programming Proficiency : Proficiency in Python for scripting, custom logic, and advanced automation; comfort in other languages like Go, Java, or Node.js is a plus.
- Authentication and Authorization : Practical experience with various authentication and authorization mechanisms : Basic Auth, OAuth 2.0 (including different flows), JWT, and API Keys.
- Command-Line Tools : Comfortable and experienced working with CLI tools such as curl, httpie, jq, and scripting environments like Bash for API interaction and debugging.
- Version Control : Familiarity with version control systems (e.g., Git) and collaborative development workflows.
- Problem-Solving : Excellent analytical and problem-solving skills with a logical approach to integration challenges.
- Communication : Strong verbal and written communication skills to articulate technical concepts to both technical and non-technical audiences.

Nice-to-Have Skills
- Experience in building integrations with specific cybersecurity platforms, such as EDRs, SIEMs, SOARs, and Vulnerability Management tools.
- Knowledge of data modeling and data transformation, along with their best practices.
- Understanding of cloud platforms (AWS, Azure, GCP) and their API ecosystems.
- Experience with continuous integration/continuous deployment (CI/CD) pipelines.
(ref:hirist.tech)
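A recurring pattern when pulling alerts from SIEM/EDR APIs is cursor-based pagination: each response carries a cursor for the next page until it is exhausted. This sketch stubs out the HTTP call entirely (the page data and `fetch_page` function are invented) to show just the loop:

```python
# Cursor pagination loop: follow `next` cursors until the API signals
# the end with None. fetch_page() stands in for an authenticated GET.

PAGES = {
    None: {"items": [1, 2], "next": "c1"},
    "c1": {"items": [3, 4], "next": "c2"},
    "c2": {"items": [5], "next": None},
}

def fetch_page(cursor=None):
    return PAGES[cursor]  # real code: GET /alerts?cursor=...

def fetch_all():
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page["next"]
        if cursor is None:
            return items

print(fetch_all())  # [1, 2, 3, 4, 5]
```

In production, the same loop also needs rate-limit handling (back off on HTTP 429) and cursor checkpointing so an interrupted sync can resume.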
Posted 2 days ago
5.0 - 8.0 years
0 Lacs
Greater Lucknow Area
On-site
Job Description
We are seeking a high-impact AI/ML Engineer to lead the design, development, and deployment of machine learning and AI solutions across vision, audio, and language modalities. You'll be part of a fast-paced, outcome-oriented AI & Analytics team, working alongside data scientists, engineers, and product leaders to transform business use cases into real-time, scalable AI systems. This role demands strong technical leadership, a product mindset, and hands-on expertise in Computer Vision, Audio Intelligence, and Deep Learning.

Key Responsibilities
- Architect, develop, and deploy ML models for multimodal problems, including vision (image/video), audio (speech/sound), and NLP tasks.
- Own the complete ML lifecycle: data ingestion, model development, experimentation, evaluation, deployment, and monitoring.
- Leverage transfer learning, foundation models, or self-supervised approaches where suitable.
- Design and implement scalable training pipelines and inference APIs using frameworks like PyTorch or TensorFlow.
- Collaborate with MLOps, data engineering, and DevOps to productionize models using Docker, Kubernetes, or serverless infrastructure.
- Continuously monitor model performance and implement retraining workflows to ensure accuracy over time.
- Stay ahead of the curve on cutting-edge AI research (e.g., generative AI, video understanding, audio embeddings) and incorporate innovations into production systems.
- Write clean, well-documented, and reusable code to support agile experimentation and long-term platform development.

Qualifications
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
- 5-8+ years of experience in AI/ML Engineering, with at least 3 years in applied deep learning.

Technical Skills
- Languages: Expert in Python; good knowledge of R or Java is a plus.
- ML/DL Frameworks: Proficient with PyTorch, TensorFlow, Scikit-learn, ONNX.
- Computer Vision: Image classification, object detection, OCR, segmentation, tracking (YOLO, Detectron2, OpenCV, MediaPipe).
- Audio AI: Speech recognition (ASR), sound classification, audio embedding models (Wav2Vec2, Whisper, etc.).
- Data Engineering: Strong with Pandas, NumPy, SQL, and preprocessing pipelines for structured and unstructured data.
- NLP/LLMs: Working knowledge of Transformers, BERT/LLaMA, and the Hugging Face ecosystem is preferred.
- Cloud & MLOps: Experience with AWS/GCP/Azure, MLflow, SageMaker, Vertex AI, or Azure ML.
- Deployment & Infrastructure: Experience with Docker, Kubernetes, REST APIs, serverless ML inference.
- CI/CD & Version Control: Git, DVC, ML pipelines, Jenkins, Airflow, etc.

Soft Skills & Competencies
- Strong analytical and systems thinking; able to break down business problems into ML components.
- Excellent communication skills; able to explain models, results, and decisions to non-technical stakeholders.
- Proven ability to work cross-functionally with designers, engineers, product managers, and analysts.
- Demonstrated bias for action, rapid experimentation, and iterative delivery of impact.
(ref:hirist.tech)
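The responsibility above to "monitor model performance and implement retraining workflows" can be made concrete with a small rolling-accuracy check. This is a hedged sketch in plain Python; the class name, window size, and threshold are illustrative assumptions, and a production system would log metrics to a tool like MLflow and trigger a pipeline rather than return a boolean:

```python
from collections import deque

class AccuracyMonitor:
    """Track rolling accuracy of a deployed model and flag retraining need."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.results = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.threshold = threshold

    def record(self, prediction, label) -> None:
        self.results.append(1 if prediction == label else 0)

    def accuracy(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 1.0

    def needs_retraining(self) -> bool:
        # Only judge once the rolling window is full of samples
        return (len(self.results) == self.results.maxlen
                and self.accuracy() < self.threshold)

monitor = AccuracyMonitor(window=4, threshold=0.75)
for pred, label in [(1, 1), (0, 1), (0, 0), (1, 0)]:
    monitor.record(pred, label)
print(monitor.accuracy())          # 0.5
print(monitor.needs_retraining())  # True
```

The fixed-size deque gives a sliding window, so old predictions age out and the check reacts to recent drift rather than lifetime averages.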
Posted 2 days ago
5.0 years
0 Lacs
Greater Lucknow Area
Remote
Role: NestJS Backend Engineer
Location: Remote, India

What's In It For You?
- Make a Real Impact: Play a crucial role in building a cutting-edge, unified platform for a leading global financial institution. Your work will directly enhance user experiences and drive business growth on an international scale.
- Modern Tech Stack: Dive deep into a modern technology environment, working extensively with NestJS to build scalable and robust backend services and Experience APIs. You'll also gain exposure to Flutter, Salesforce, and cloud platforms.
- Career Growth: This is a high-visibility project offering significant opportunities to expand your skills in enterprise-grade application development, API design, and complex system integrations.
- Collaborative & Agile Environment: Become part of a dynamic, high-performing Agile team. Contribute to a culture of innovation and continuous improvement, working alongside talented engineers and product specialists.
- Build for Scale: Develop backend solutions designed to support hundreds of thousands of users and millions of transactions, contributing to a platform with a significant global reach.

Your Role
As a Backend Engineer at Aviato Consulting, you will be a key contributor to a transformative project. You'll focus on designing, developing, and deploying high-quality backend services using NestJS.

Key Responsibilities
- Develop and maintain scalable, secure, and performant backend services and APIs using NestJS and TypeScript.
- Collaborate within an Agile squad to deliver features for a core workstream of a large-scale platform.
- Design and implement robust APIs for integration with frontend applications (Flutter) and other backend systems (including Salesforce and enterprise domain services).
- Write clean, well-documented, and thoroughly tested code (unit and contract testing).
- Participate actively in code reviews, sprint planning, and other Agile ceremonies.
- Troubleshoot and resolve backend issues, ensuring high system reliability and performance.
- Ensure solutions align with enterprise architecture, security standards, and project guidelines.

What We're Looking For
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Minimum 5 years of backend development experience, with strong, hands-on expertise in NestJS and TypeScript/JavaScript.
- Proven experience in designing, building, and consuming RESTful APIs.
- Solid understanding of microservices architecture, asynchronous programming, and database technologies (e.g., MongoDB).
- Familiarity with Agile methodologies and CI/CD practices (e.g., Git, GitHub Actions).
- Strong problem-solving skills and a passion for writing high-quality code.
- Excellent communication and teamwork abilities.
- Experience with cloud platforms (e.g., GCP, AWS) is a plus.
- Experience in the financial services or insurance industry is beneficial but not mandatory.
(ref:hirist.tech)
Posted 2 days ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Job Description
We are looking for an experienced Data Scientist / AI Developer with a strong foundation in classical machine learning, deep learning, natural language processing (NLP), and generative AI. You will be responsible for designing and implementing AI models, including fine-tuning large language models (LLMs), and developing innovative solutions to solve complex problems in a variety of domains.

Key Responsibilities
- Develop and implement machine learning models and deep learning algorithms for various use cases.
- Work on NLP projects involving text classification, language modelling, entity recognition, and sentiment analysis.
- Leverage generative AI techniques to create innovative solutions and models for content generation, summarization, and translation tasks.
- Fine-tune large language models (LLMs) to optimize performance for specific tasks or applications.
- Collaborate with cross-functional teams to design AI-driven solutions that address business problems.
- Analyse large-scale datasets; perform data pre-processing, feature engineering, and model evaluation.
- Stay updated with the latest advancements in AI, ML, NLP, and LLMs to continuously improve models and methodologies.
- Present findings and insights to stakeholders in a clear and actionable manner.
- Build and maintain end-to-end machine learning pipelines for scalable deployment.

Required Skills
- Strong expertise in supervised and unsupervised machine learning techniques.
- Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or Keras.
- Solid experience in Natural Language Processing (NLP), including tokenization, embeddings, and sequence modelling.
- Hands-on experience with generative AI models and their practical applications.
- Proven ability to fine-tune large language models (LLMs) for specific tasks.
- Strong programming skills in Python and familiarity with libraries like Scikit-learn, NumPy, and pandas.
- Experience in handling large datasets and working with databases (SQL, NoSQL).
- Familiarity with cloud platforms (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes).
- Deep expertise in computer vision, including techniques for object detection, image segmentation, image classification, and feature extraction.
- Strong problem-solving skills, analytical thinking, and attention to detail.

Preferred Skills
- Proven experience in fine-tuning LLMs (like the Llama series or Mistral) for specific tasks and optimizing their performance.
- Expertise in computer vision techniques, including object detection, image segmentation, and classification.
- Proficiency with YOLO algorithms and other state-of-the-art computer vision models.
- Hands-on experience in building and deploying models in real-time applications or production environments.

Qualifications
- 3+ years of relevant experience in AI, ML, NLP, or related fields.
- Bachelor's or Master's degree in Computer Science, Statistics, or a related discipline.
(ref:hirist.tech)
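Tokenization and embeddings, listed under the NLP requirements, start from something as simple as mapping words to vocabulary indices, which is the integer sequence an embedding layer then consumes. A minimal sketch in plain Python follows; this is a toy whitespace-style tokenizer for illustration, not any specific library's API:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase and split on non-alphanumeric characters (toy tokenizer)."""
    return [t for t in re.split(r"\W+", text.lower()) if t]

def build_vocab(corpus: list[str], min_count: int = 1) -> dict[str, int]:
    """Map each token to an integer id; id 0 is reserved for unknown tokens."""
    counts = Counter(t for doc in corpus for t in tokenize(doc))
    vocab = {"<unk>": 0}
    for token, count in sorted(counts.items()):
        if count >= min_count:
            vocab[token] = len(vocab)
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    """Turn text into the id sequence an embedding layer would consume."""
    return [vocab.get(t, 0) for t in tokenize(text)]

corpus = ["Entity recognition finds names", "Sentiment analysis finds opinions"]
vocab = build_vocab(corpus)
print(encode("finds new opinions", vocab))  # -> [3, 0, 5]; "new" maps to <unk>
```

Production NLP replaces the word-level split with subword tokenizers (BPE, WordPiece) so rare words do not all collapse into `<unk>`, but the text-to-ids pipeline is the same shape.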
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
At Blend, we are award-winning experts who transform businesses by delivering valuable insights that make a difference. From crafting a data strategy that focuses resources on what will make the biggest difference to your company, to standing up infrastructure, to turning raw data into value through data science and visualization: we do it all. We believe that data that doesn't drive value is lost opportunity, and we are passionate about helping our clients drive better outcomes through applied analytics. We are obsessed with delivering world-class solutions to our customers through our network of industry-leading partners. If this sounds like your kind of challenge, we would love to hear from you. For more information, visit www.blend360.com

Job Description
We are looking for someone who is ready for the next step in their career and is excited by the idea of solving problems and designing best-in-class solutions. However, they also need to be aware of the practicalities of making a difference in the real world – whilst we love innovative advanced solutions, we also believe that sometimes a simple solution can have the most impact. Our AI Engineer is someone who feels most comfortable solving problems, answering questions, and proposing solutions. We place a high value on the ability to communicate and translate complex analytical thinking into non-technical and commercially oriented concepts, and experience working on difficult projects and/or with demanding stakeholders is always appreciated.

What can you expect from the role?
- Contribute to the design, development, deployment, and maintenance of AI solutions
- Use a variety of AI engineering tools and methods to deliver
- Own parts of projects end-to-end
- Contribute to solution design and proposal submissions
- Support the development of the AI engineering team within Blend
- Maintain in-depth knowledge of the AI ecosystem and trends
- Mentor junior colleagues

Qualifications
- Contribute to the design, development, testing, deployment, maintenance, and improvement of robust, scalable, and reliable software systems, adhering to best practices.
- Apply Python programming skills for both software development and AI/ML tasks.
- Utilize analytical and problem-solving skills to debug complex software, infrastructure, and AI integration issues.
- Proficiently use version control systems, especially Git, and ML/LLMOps model versioning protocols.
- Assist in analysing complex or ambiguous AI problems, breaking them down into manageable tasks, and contributing to conceptual solution design within the rapidly evolving field of generative AI.
- Work effectively within a standard software development lifecycle (e.g., Agile, Scrum).
- Contribute to the design and utilization of scalable systems using cloud services (AWS, Azure, GCP), including compute, storage, and ML/AI services. (Preferred: Azure)
- Participate in designing and building scalable and reliable infrastructure to support AI inference workloads, including implementing APIs, microservices, and orchestration layers.
- Contribute to designing, building, or working with event-driven architectures and relevant technologies (e.g., Kafka, RabbitMQ, cloud event services) for asynchronous processing and system integration.
- Experience with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes, Airflow, Kubeflow, Databricks Jobs, etc.).
- Assist in implementing CI/CD pipelines and optionally using IaC principles/tools for deploying and managing infrastructure and ML/LLM models.
- Contribute to developing and deploying LLM-powered features into production systems, translating experimental outputs into robust services with clear APIs.
- Demonstrate familiarity with transformer model architectures and a practical understanding of LLM specifics like context handling.
- Assist in designing, implementing, and optimising prompt strategies (e.g., chaining, templates, dynamic inputs); practical understanding of output post-processing.
- Experience integrating with third-party LLM providers, managing API usage, rate limits, and token efficiency, and applying best practices for versioning, retries, and failover.
- Contribute to coordinating multi-step AI workflows, potentially involving multiple models or services, and optimising for latency and cost (sequential vs. parallel execution).
- Assist in monitoring, evaluating, and optimising AI/LLM solutions for performance (latency, throughput, reliability), accuracy, and cost in production environments.

Additional Information
- Experience specifically with the Databricks MLOps platform.
- Familiarity with fine-tuning classical LLM models.
- Experience ensuring security and observability for AI services.
- Contribution to relevant open-source projects.
- Familiarity with building agentic GenAI modules or systems.
- Hands-on experience implementing and automating MLOps/LLMOps practices, including model tracking, versioning, deployment, monitoring (latency, cost, throughput, reliability), logging, and retraining workflows.
- Experience working with MLOps/experiment tracking and operational tools (e.g., MLflow, Weights & Biases).
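The "retries and failover" practice mentioned above for third-party LLM providers is commonly implemented as exponential backoff around the provider call. A hedged sketch in plain Python, where the flaky function stands in for any real provider SDK and the delays are arbitrary illustrative values:

```python
import time

def call_with_retries(call, max_attempts=4, base_delay=0.5, sleep=time.sleep):
    """Retry a flaky provider call with exponential backoff.

    `call` stands in for any third-party LLM request; real code would
    catch the provider's specific rate-limit/transient error classes
    instead of a bare RuntimeError.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # failover to a backup provider could happen here
            sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...

# Simulate a provider that rate-limits twice, then succeeds
attempts = {"n": 0}
def flaky_completion():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Too Many Requests")
    return "completion text"

print(call_with_retries(flaky_completion, sleep=lambda s: None))  # completion text
print(attempts["n"])  # 3
```

Injecting `sleep` as a parameter keeps the backoff testable; production code would also add jitter and a total-time budget so parallel workers do not retry in lockstep.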
Posted 2 days ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Working with Us
Challenging. Meaningful. Life-changing. Those aren't words that are usually associated with a job. But working at Bristol Myers Squibb is anything but usual. Here, uniquely interesting work happens every day, in every department. From optimizing a production line to the latest breakthroughs in cell therapy, this is work that transforms the lives of patients, and the careers of those who do it. You'll get the chance to grow and thrive through opportunities uncommon in scale and scope, alongside high-achieving teams. Take your career farther than you thought possible.

Bristol Myers Squibb recognizes the importance of balance and flexibility in our work environment. We offer a wide variety of competitive benefits, services and programs that provide our employees with the resources to pursue their goals, both at work and in their personal lives. Read more: careers.bms.com/working-with-us.

Key Responsibilities
- Accountable for the development and support of high-quality software products used by Pharmaceutical Development and Global Supplies within GPS.
- Collaborate with Business and IT Plan functions to develop solutions for business problems, including defining business requirements, providing project timelines and budget, and developing acceptance criteria, testing, training, and change management plans.
- Contribute heavily towards developing a plan for development activities and translate it into actionable projects, including releases, required modifications, and discretionary enhancements to support the application life cycle.
- Collaborate directly with business clients, IT Business Partners, and other IT functions on delivery of digital capabilities.
- Take complete ownership of releases from design to deployment and a successful production run, including coordination with the Ops team to deploy applications to various environments.

Qualifications & Experience
- Have a strong commitment to a career in technology with a passion for healthcare.
- Strong communication skills, the ability to understand the needs of the business, and a commitment to delivering the best user experience and adoption.
- 3+ years of software development experience across the full SDLC, involving analysis, design, development, testing, and production.
- Proven experience as a full stack developer or in a similar role.
- Learn, design, and implement new technologies.
- Experience leading and mentoring small teams of highly skilled technical developers.
- Experience in designing and implementing business-critical applications within the AWS ecosystem.
- Experience with cloud platforms such as AWS, Azure, GCP.
- Experience as a Python/Node.js developer.
- Strong knowledge of front-end technologies such as HTML, CSS, JavaScript, and React.js.
- Strong knowledge of relational database technologies such as MySQL, SQL Server, Oracle, and PostgreSQL.
- Strong knowledge of NoSQL databases, such as MongoDB, Amazon DynamoDB, and Cassandra.
- Knowledge of source code repositories like SVN, GitHub, Bitbucket.
- Knowledge of design and implementation of N-tier applications in both cloud and on-prem environments.

Ideal candidates would also
- Have a strong commitment to a career in technology with a passion for healthcare.
- Lead initiatives related to continuous improvement or implementation of new technologies.
- Work independently on most deliverables.
- Participate in decision making and bring a variety of strong views and perspectives to achieve team objectives.
- Have knowledge of the Software Development Lifecycle (SDLC) and computer systems validation (CSV).
- Quickly learn new technologies and incorporate them into a solution.
- Have project management skills and experience with agile and scrum methodologies.
- Collaborate across multiple functional teams.

#HYDIT

If you come across a role that intrigues you but doesn't perfectly line up with your resume, we encourage you to apply anyway. You could be one step away from work that will transform your life and career.
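The relational database skills in the list above can be illustrated with Python's built-in sqlite3 module, which speaks the same SQL dialect family as MySQL or PostgreSQL. This is a hedged, generic sketch; the `releases` table and its columns are invented for illustration, not taken from any real BMS system:

```python
import sqlite3

# In-memory database standing in for the relational engines named above
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE releases (
        id INTEGER PRIMARY KEY,
        app TEXT NOT NULL,
        version TEXT NOT NULL,
        deployed INTEGER DEFAULT 0   -- 0 = pending, 1 = in production
    )
""")
conn.executemany(
    "INSERT INTO releases (app, version, deployed) VALUES (?, ?, ?)",
    [("supply-portal", "1.2.0", 1),
     ("supply-portal", "1.3.0", 0),
     ("labels-api", "2.0.1", 1)],
)

# Parameterized queries (the ? placeholders) guard against SQL injection
pending = conn.execute(
    "SELECT app, version FROM releases WHERE deployed = ?", (0,)
).fetchall()
print(pending)  # [('supply-portal', '1.3.0')]
```

The same query shape ports to MySQL or PostgreSQL by swapping the driver; a NoSQL store like MongoDB would model the same data as documents instead of rows.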
Uniquely Interesting Work, Life-changing Careers
With a single vision as inspiring as Transforming patients' lives through science™, every BMS employee plays an integral role in work that goes far beyond ordinary. Each of us is empowered to apply our individual talents and unique perspectives in a supportive culture, promoting global participation in clinical trials, while our shared values of passion, innovation, urgency, accountability, inclusion and integrity bring out the highest potential of each of our colleagues.

On-site Protocol
BMS has an occupancy structure that determines where an employee is required to conduct their work. This structure includes site-essential, site-by-design, field-based and remote-by-design jobs. The occupancy type that you are assigned is determined by the nature and responsibilities of your role.

Site-essential roles require 100% of shifts onsite at your assigned facility. Site-by-design roles may be eligible for a hybrid work model with at least 50% onsite at your assigned facility. For these roles, onsite presence is considered an essential job function and is critical to collaboration, innovation, productivity, and a positive Company culture. For field-based and remote-by-design roles, the ability to physically travel to visit customers, patients or business partners and to attend meetings on behalf of BMS as directed is an essential job function.

BMS is dedicated to ensuring that people with disabilities can excel through a transparent recruitment process, reasonable workplace accommodations/adjustments and ongoing support in their roles. Applicants can request a reasonable workplace accommodation/adjustment prior to accepting a job offer. If you require reasonable accommodations/adjustments in completing this application, or in any part of the recruitment process, direct your inquiries to adastaffingsupport@bms.com. Visit careers.bms.com/eeo-accessibility to access our complete Equal Employment Opportunity statement.
BMS cares about your well-being and the well-being of our staff, customers, patients, and communities. As a result, the Company strongly recommends that all employees be fully vaccinated for Covid-19 and keep up to date with Covid-19 boosters.

BMS will consider for employment qualified applicants with arrest and conviction records, pursuant to applicable laws in your area. If you live in or expect to work from Los Angeles County if hired for this position, please visit this page for important additional information: https://careers.bms.com/california-residents/

Any data processed in connection with role applications will be treated in accordance with applicable data privacy policies and regulations.
Posted 2 days ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About the company:
Avenue Code is the leading software consultancy focused on delivering end-to-end development solutions for digital transformation across every vertical. We're privately held, profitable, and have been on a solid growth trajectory since day one. We care deeply about our clients, our partners, and our people. We prefer the word 'partner' over 'vendor', and our investment in professional relationships is a reflection of that philosophy. We pride ourselves on our technical acumen, our collaborative problem-solving ability, and the warm professionalism of our teams. Avenue Code has believed in and promoted plurality actions for over 10 years, understanding that recognizing differences and fostering a safe environment, employment opportunities, representation, and support are the best ways to promote an increasingly equitable culture.

About the opportunity:
Onsite Position at HDC – India, Hyderabad, Telangana. This is not a hybrid role – candidates are expected to work from the office 4 days a week.
Responsibilities:
- 8+ years of software development experience with high-volume e-commerce or online retail services, 5 years of which are specific to front-end and integration technologies.
- Demonstrable proficiency and experience in NodeJS-based technologies and/or Java, microservices, and integration technologies like Kafka.
- Exposure to API management (via Apigee or Mulesoft) and Identity and Access Management technologies (like Ping Federate, OAuth, and OpenID Connect).
- Experience running workloads on public clouds such as AWS, Azure, or GCP, and experience with container-based technologies like Docker and Cloud Foundry.
- Prior experience working with Continuous Integration and Deployment in a DevOps-oriented product development environment, and familiarity with modern monitoring and alerting technologies like Splunk, New Relic, and PagerDuty.
- Well versed in system and technical design principles and performant coding practices, ensuring security requirements are not compromised for functionality and/or performance.
- Experience laying out a go-live plan at the conceptual stage, analyzing the pros and cons of multiple options.
- Experience with Content Management Systems, Digital Asset Management systems, and/or Personalization Systems is a major plus.

SKILLS:
– Knowledge of Oracle/Microsoft Dynamics
– Programming skills (e.g., Java, C#, Python)
– Experience with enterprise application integration (EAI)
– Knowledge of business processes and workflows
– Effective communication and teamwork skills

Avenue Code reinforces its commitment to privacy and to all the principles guaranteed by the most accurate global data protection laws, such as GDPR, LGPD, CCPA and CPRA. The candidate data shared with Avenue Code will be kept confidential and will not be transmitted to disinterested third parties, nor will it be used for purposes other than the application for open positions.
As a consultancy company, Avenue Code may share your information with its clients and other companies from the CompassUol Group to which Avenue Code's consultants are allocated to perform its services.
Posted 2 days ago
14.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in!

Job Description
REQUIREMENTS:
- Total experience: 14+ years.
- Hands-on architecture experience using Camunda BPM.
- Deep expertise in BPMN, DMN, and CMMN standards and their practical implementation.
- Strong programming skills in Java, Spring Boot, and building RESTful APIs.
- Experience in business process automation or enterprise application development.
- Proven experience with microservices architecture and system integration patterns.
- Solid understanding of DevOps practices, including CI/CD pipelines, logging, monitoring, and automated testing.
- Excellent analytical, communication, presentation, and stakeholder management skills, with the ability to engage effectively with senior stakeholders.
- Hands-on experience with Camunda 8 projects.
- Ability to explain complex concepts clearly to both technical and non-technical stakeholders.
- Willingness to travel for short-duration assignments such as client meetings, workshops, architecture reviews, or PoC demonstrations.
- Experience working in Agile environments.
- Exposure to cloud platforms (AWS, Azure, GCP) and containerized environments (Docker/Kubernetes).
- Familiarity with other BPM tools or workflow engines is a plus; knowledge of BPM solutions like Pega or Appian and experience with RPA tools such as UiPath or Automation Anywhere is valued.
- Familiarity with pre-sales activities such as RFPs/RFIs, technical points of view, and proof-of-concepts (PoCs).
RESPONSIBILITIES:
- Analyze complex business processes and workflows to identify opportunities for automation using the Camunda BPM platform.
- Define and drive end-to-end technical architecture and design for scalable Camunda-based applications.
- Lead requirement analysis and engage with stakeholders to ensure solutions align with business needs and timelines.
- Design, develop, test, deploy, monitor, and optimize Camunda workflows, decision models (DMN), and case models (CMMN).
- Integrate Camunda solutions with external systems via APIs and ensure robust, secure interactions.
- Apply DevOps practices (CI/CD pipelines, monitoring, testing, etc.) to support the Camunda application lifecycle.
- Mentor and guide developers and engineers on Camunda best practices and solution design.
- Engage in pre-sales activities such as RFP/RFI responses, technical solutioning, estimates, and proof-of-concepts (PoCs).
- Contribute to organizational thought leadership by writing blogs, creating whitepapers, delivering webinars, etc.
- Build and maintain proprietary utilities, reusable components, and accelerators to boost delivery efficiency.
- Stay current with the latest Camunda platform features and BPM trends; evangelize within technical communities.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
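The DMN decision models mentioned in the responsibilities boil down to decision tables: ordered rules mapping input conditions to outputs under a hit policy. As a hedged, engine-agnostic sketch (the approval rules are invented for illustration; a real Camunda deployment evaluates DMN XML, not Python lambdas):

```python
def evaluate_decision_table(rules, inputs):
    """Evaluate a DMN-style decision table under a FIRST hit policy.

    Each rule is a (predicate, output) pair; the first matching rule
    wins, mirroring DMN's FIRST ("F") hit policy.
    """
    for predicate, output in rules:
        if predicate(inputs):
            return output
    return None  # no rule matched

# Toy approval decision, loosely modeled on a DMN table
approval_rules = [
    (lambda d: d["amount"] <= 1000, "auto-approve"),
    (lambda d: d["amount"] <= 10000 and d["risk"] == "low", "manager-approve"),
    (lambda d: True, "manual-review"),  # catch-all rule, lowest priority
]

print(evaluate_decision_table(approval_rules, {"amount": 500, "risk": "low"}))
# auto-approve
print(evaluate_decision_table(approval_rules, {"amount": 50000, "risk": "high"}))
# manual-review
```

Keeping decisions in tables like this, rather than scattered through workflow code, is what lets business analysts review and change the rules independently of the BPMN process that invokes them.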
Posted 2 days ago
The job market for Google Cloud Platform (GCP) professionals in India is growing rapidly as more companies move toward cloud-based solutions. GCP offers a wide range of services and tools that help businesses manage their infrastructure, data, and applications in the cloud, which has created high demand for skilled professionals who can work with GCP effectively.
The average salary range for GCP professionals in India varies based on experience and job role. Entry-level positions can expect a salary range of INR 5-8 lakhs per annum, while experienced professionals can earn anywhere from INR 12-25 lakhs per annum.
Typically, a career in GCP progresses from a Junior Developer to a Senior Developer, then to a Tech Lead position. As professionals gain more experience and expertise in GCP, they can move into roles such as Cloud Architect, Cloud Consultant, or Cloud Engineer.
In addition to GCP, professionals in this field are often expected to have skills in:
- Cloud computing concepts
- Programming languages such as Python, Java, or Go
- DevOps tools and practices
- Networking and security concepts
- Data analytics and machine learning
As the demand for GCP professionals continues to rise in India, now is the perfect time to upskill and pursue a career in this field. By mastering GCP and related skills, you can unlock numerous opportunities and build a successful career in cloud computing. Prepare well, showcase your expertise confidently, and land your dream job in the thriving GCP job market in India.