8.0 - 13.0 years
20 - 35 Lacs
Kolkata
Work from Office
Experience: 8+ Years
Job Location: Kolkata
Notice Period: 30 Days
Job Title: Technical Project Manager

Job Summary:
We are looking for a skilled Technical Project Manager with a strong background in the IT services industry to manage and deliver client-facing technology projects. The ideal candidate will have a solid understanding of service delivery models, strong stakeholder management skills, and hands-on experience in managing technical teams across multiple client engagements.

Key Responsibilities:
• Manage the end-to-end project lifecycle for client engagements, ensuring timely and high-quality delivery.
• Interact with clients to gather requirements, define project scope, and set expectations.
• Collaborate with cross-functional teams including developers, QA, architects, and support teams to drive project execution.
• Track project milestones, manage risks and issues, and communicate status updates to internal and external stakeholders.
• Ensure adherence to service-level agreements (SLAs), budgets, and compliance requirements.
• Lead Agile/Scrum teams and ensure alignment between client expectations and delivery outcomes.
• Maintain project documentation, resource planning, and reporting throughout the project lifecycle.

Required Skills & Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 8+ years of experience in the IT services industry, with at least 3-5 years in a project management role.
• Strong knowledge of SDLC, Agile/Scrum, and Waterfall methodologies.
• Hands-on experience in managing projects involving Java, .NET, cloud platforms, or DevOps tools is a plus.
• Excellent communication, stakeholder management, and leadership skills.
• Proficiency in tools such as Jira, MS Project, ServiceNow, or similar platforms.

Preferred Qualifications:
• PMP, PRINCE2, or Scrum Master certification.
• Experience in managing offshore-onsite delivery models.
• Exposure to client environments across BFSI, Retail, Healthcare, or Manufacturing domains.
• Working knowledge of ITIL practices is a plus.
Posted 1 month ago
10.0 - 15.0 years
30 - 40 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & Responsibilities:
As an Architect, you will work to solve some of the most complex and captivating data management problems that enable our clients to become data-driven organizations. You will seamlessly switch between the roles of individual contributor, team member, and data modeling architect as each project demands, to define, design, and deliver actionable insights.

On a typical day, you might:
• Engage with clients and understand the business requirements to translate them into data models.
• Analyze customer problems, propose solutions from a data-structure perspective, and estimate and deliver the proposed solutions.
• Create and maintain a Logical Data Model (LDM) and Physical Data Model (PDM), applying best practices to provide business insights.
• Use a data modeling tool to create appropriate data models.
• Create and maintain the source-to-target data mapping document, including documentation of all entities, attributes, data relationships, primary and foreign key structures, allowed values, codes, business rules, glossary terms, etc.
• Gather and publish data dictionaries.
• Ideate, design, and guide teams in building automations and accelerators.
• Maintain data models, capture data models from existing databases, and record descriptive information.
• Contribute to building data warehouses and data marts (on cloud) while performing data profiling and quality analysis.
• Use version control to maintain versions of data models.
• Collaborate with data engineers to design and develop data extraction and integration code modules.
• Partner with data engineers and testing practitioners to strategize ingestion logic, consumption patterns, and testing.
• Collaborate with cross-functional stakeholders to ideate, design, and develop the next-generation data platform.
• Work with the client to define, establish, and implement the right modeling approach for each requirement.
• Help define standards and best practices.
• Monitor project progress and keep the leadership teams informed of milestones, impediments, etc.
• Coach team members and review code artifacts.
• Contribute to proposals and RFPs.

Preferred candidate profile:
• 10+ years of experience in the data space.
• Solid SQL knowledge; able to suggest modeling approaches for a given problem.
• Significant experience in one or more RDBMS (Oracle, DB2, SQL Server).
• Real-world experience working with OLAP and OLTP database models (dimensional models).
• Comprehensive understanding of star schema, snowflake schema, and Data Vault modeling, plus exposure to an ETL tool, data governance, and data quality.
• An eye for analyzing data and comfort following agile methodology.
• Good understanding of at least one cloud platform (Azure, AWS, or GCP) is preferred.
• Enthusiasm for coaching team members, collaborating with stakeholders across the organization, and taking complete ownership of deliverables.
• Experience contributing to proposals and RFPs.
• Good experience in stakeholder management.
• Good communication skills and experience leading a team.

You are important to us, let's stay connected!
Every individual comes with a different set of skills and qualities, so even if you don't tick all the boxes for the role today, we urge you to apply, as there might be a suitable or unique role for you tomorrow. We are an equal-opportunity employer. Our diverse and inclusive culture and values guide us to listen, trust, respect, and encourage people to grow the way they desire.

Note: The designation will be commensurate with expertise and experience. Compensation packages are among the best in the industry.
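The star schema mentioned in the profile above (a central fact table joined to surrounding dimension tables) can be sketched with a minimal, hypothetical retail example using SQLite; the table and column names are illustrative, not taken from the posting:

```python
import sqlite3

# Minimal star schema: one fact table with foreign keys into two dimensions.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE fact_sales (
    date_key INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    units INTEGER,
    revenue REAL
);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, '2024-01-01', 2024)")
conn.execute("INSERT INTO dim_product VALUES (1, 'Widget', 'Hardware')")
conn.execute("INSERT INTO fact_sales VALUES (20240101, 1, 3, 29.97)")

# A typical analytical query joins the fact table to a dimension and aggregates.
row = conn.execute("""
    SELECT p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY p.category
""").fetchone()
print(row)  # ('Hardware', 29.97)
```

A snowflake schema differs only in that the dimensions themselves are further normalized (e.g., `dim_product` would reference a separate `dim_category` table).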
Posted 1 month ago
6.0 - 9.0 years
13 - 18 Lacs
Bengaluru
Work from Office
Job Title: Lead Engineer – CI/CD DevOps
Location: Bengaluru
Employment: Full time
Department: Wireline
Domain: Software
Reporting to: Group Engineer

About Tejas Networks:
Tejas Networks is a global broadband, optical and wireless networking company, with a focus on technology, innovation and R&D. We design and manufacture high-performance wireline and wireless networking products for telecommunications service providers, internet service providers, utilities, defence and government entities in over 75 countries. Tejas has an extensive portfolio of leading-edge telecom products for building end-to-end telecom networks based on the latest technologies and global standards, with IPR ownership. We are a part of the Tata Group, with Panatone Finvest Ltd. (a subsidiary of Tata Sons Pvt. Ltd.) being the majority shareholder. Tejas has a rich portfolio of patents and has shipped more than 900,000 systems across the globe with an uptime of 99.999%. Our product portfolio encompasses wireless technologies (4G/5G based on 3GPP and O-RAN standards), fiber broadband (GPON/XGS-PON), carrier-grade optical transmission (DWDM/OTN), packet switching and routing (Ethernet, PTN, IP/MPLS) and Direct-to-Mobile and Satellite-IoT communication platforms. Our unified network management suite simplifies network deployments and service implementation across all our products, with advanced capabilities for predictive fault detection and resolution. As an R&D-driven company, we recognize that human intelligence is a core asset that drives the organization's long-term success. Over 60% of our employees are in R&D, and we are reshaping telecom networks, one innovation at a time.

Why join Tejas:
We are on a journey to connect the world with some of the most innovative products and solutions in the wireless and wireline optical networking domains. Would you like to be part of this journey and do something truly meaningful? Challenge yourself by working in Tejas' fast-paced, autonomous learning environment and see your output and contributions become part of live products worldwide. At Tejas, you will have the unique opportunity to work with cutting-edge technologies alongside some of the industry's brightest minds. From 5G to DWDM/OTN, Switching and Routing, we work on technologies and solutions that create a connected society. Our solutions power over 500 networks across 75+ countries worldwide, and we're constantly pushing boundaries to achieve more. If you thrive on taking ownership, have a passion for learning, and enjoy challenging the status quo, we want to hear from you!

Who we are:
In the dynamic world of enterprise technology, the shift towards cloud-native solutions is not just a trend but a necessity. As we embark on developing a state-of-the-art Network Management System (NMS) and reporting tool, our goal is to leverage the latest technologies to create a robust, scalable, and efficient solution. This initiative is crucial for ensuring our network's optimal performance, security, and reliability while providing insightful analytics through advanced reporting capabilities. Our project aims to design and implement a cloud-native NMS and reporting tool that will revolutionize how we manage and monitor our network infrastructure. By utilizing cutting-edge technologies, we will ensure that our solution is not only future-proof but also capable of adapting to the ever-evolving demands of our enterprise environment.

What you will work on:
• Develop and implement automation strategies for software build, deployment, and infrastructure management.
• Design and maintain CI/CD pipelines to enable frequent and reliable software releases.
• Collaborate with development, QA, and operations teams to optimize workflows and enhance software quality.
• Automate repetitive tasks and processes to improve efficiency and reduce manual intervention.
• Monitor and troubleshoot CI/CD pipelines to ensure smooth operation and quick resolution of issues.
• Implement and maintain robust monitoring and alerting tools to ensure system reliability.
• Work with tools and technologies such as Git, Jenkins, Docker, Kubernetes, and cloud platforms (e.g., AWS, Azure).
• Ensure compliance with security standards and best practices throughout the development lifecycle.
• Continuously improve CI/CD processes by incorporating new tools, techniques, and best practices.
• Provide training and guidance to team members on DevOps principles and practices.
• Lead a team and guide them toward optimum output.

Mandatory skills:
• Strong experience in software development and system administration.
• Proficiency in programming languages such as Python, Java, or similar.
• Strong understanding of CI/CD concepts and experience with tools like Jenkins, Git, Docker, and Kubernetes.
• Experience with cloud platforms such as AWS, Azure, or Google Cloud.
• Excellent problem-solving skills and attention to detail.
• Strong communication and collaboration skills.
• Ability to work in a fast-paced, dynamic environment.

Desired skills:
• Experience with infrastructure-as-code (IaC) tools like Terraform or Ansible.
• Knowledge of container orchestration tools like Kubernetes or Rancher.
• Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack.
• Certification in AWS, Azure, or other relevant technologies.

Preferred Qualifications:
• Experience: 6 to 9 years, from a telecommunications or networking background.
• Education: B.Tech/BE (CSE/ECE/EEE/IS) or an equivalent degree.
• Strong coding skills in CI/CD and DevOps with Java.

Diversity and Inclusion Statement:
Tejas Networks is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We welcome applicants of all backgrounds regardless of race, color, religion, gender, sexual orientation, age, or veteran status. Our goal is to build a workforce that reflects the diverse communities we serve and to ensure every employee feels valued and respected.
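At its core, the CI/CD pipeline work this role describes comes down to running ordered stages and failing fast. A toy sketch of that behavior (the stage names and the runner itself are invented for illustration; real pipelines would shell out to build/test/deploy tools via Jenkins or similar):

```python
from typing import Callable

def run_pipeline(stages: dict[str, Callable[[], bool]]) -> list[tuple[str, str]]:
    """Run named stages in order; stop at the first failure, like a CI pipeline."""
    results = []
    for name, stage in stages.items():
        ok = stage()
        results.append((name, "ok" if ok else "failed"))
        if not ok:
            break  # fail fast: later stages never run
    return results

# Hypothetical stages; each returns True on success.
report = run_pipeline({
    "build": lambda: True,
    "test": lambda: False,   # simulate a failing test stage
    "deploy": lambda: True,  # skipped because "test" failed
})
print(report)  # [('build', 'ok'), ('test', 'failed')]
```

Monitoring and alerting then amount to collecting these per-stage results and raising an alarm on any "failed" entry.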
Posted 1 month ago
12.0 - 17.0 years
9 - 13 Lacs
Noida
Work from Office
We are looking for a skilled Technical Lead with 12 to 17 years of experience to lead our team in designing and delivering innovative solutions for our manufacturing clients. The ideal candidate will have a strong background in data integration and cloud platforms, with expertise in Cloud Data Quality, Data Integration, and Cloud Data Console.

Roles and Responsibilities:
• Design and deliver high-quality solutions for manufacturing clients using cloud-based technologies.
• Lead a team of 15 members, providing guidance on solution design and delivery governance.
• Collaborate with stakeholders to align technical solutions with business objectives.
• Develop and maintain the architecture, ensuring scalability and reliability.
• Oversee the delivery process, identifying and mitigating potential risks.
• Ensure compliance with industry standards and best practices.

Job Requirements:
• Strong expertise in Cloud Data Quality, Data Integration, and Cloud Data Console.
• Experience leading a team of engineers, focusing on solution design and delivery governance.
• This is a hands-on role with responsibility for architecture, delivery governance, and stakeholder alignment.
• Strong understanding of cloud-based technologies and their applications in manufacturing.
• Excellent communication and leadership skills, with the ability to motivate and guide a team.
• Ability to work in a fast-paced environment, prioritizing multiple tasks and meeting deadlines.
• Preference for candidates based in Tier 1 cities.

Contract duration: 6-12 months (extendable).
Posted 1 month ago
7.0 - 12.0 years
5 - 9 Lacs
Noida
Work from Office
We are looking for a skilled ETL Ab Initio Developer with 7 to 12 years of hands-on experience in Ab Initio ETL development to join our team for a high-impact mainframe-to-Ab Initio data transformation project. The ideal candidate will have deep technical knowledge and hands-on experience in Ab Initio and will play a critical role in designing, developing, and optimizing complex ETL workflows.

Roles and Responsibilities:
• Lead and contribute to the development of large-scale mainframe-to-Ab Initio transformation projects.
• Design, develop, and maintain robust ETL workflows using Ab Initio tools for data extraction, transformation, and loading from various structured/unstructured sources to target platforms.
• Build reusable, generic Ab Initio components and leverage ExpressIt and continuous and batch flows effectively.
• Collaborate with business analysts, data architects, and stakeholders to understand data requirements and translate them into effective ETL solutions.
• Perform performance tuning and optimization of existing Ab Initio graphs to ensure scalability and efficiency.
• Implement complex data cleansing, transformation, and aggregation logic.
• Ensure code quality through unit testing, debugging, and peer code reviews.
• Troubleshoot and resolve production issues with a strong sense of urgency and accountability.
• Continuously seek process improvement and automation opportunities in ETL workflows.

Job Requirements:
• Minimum 7 years of hands-on experience in Ab Initio ETL development.
• Strong experience in designing and building modular and reusable Ab Initio components.
• In-depth knowledge of Ab Initio GDE, EME, ExpressIt, Continuous Flows, and testing frameworks.
• Solid understanding of data warehousing concepts, data modeling, and performance tuning.
• Excellent analytical and problem-solving skills with strong attention to detail.
• Ability to work independently with minimal supervision and collaborate in a team setting.
• Effective communication and stakeholder management skills.
• Experience with production support and real-time issue resolution.
• Familiarity with Agile methodologies and working in Agile/Scrum teams.
• Experience with mainframe data sources and legacy systems integration is preferred.
• Prior experience in large enterprise-scale ETL transformation initiatives is preferred.
• Exposure to cloud platforms or data migration to cloud-based data lakes is a plus.
Posted 1 month ago
8.0 - 12.0 years
11 - 15 Lacs
Noida
Work from Office
We are looking for a skilled Reltio Architect with 8 to 12 years of experience to lead the design and implementation of enterprise-level MDM solutions using the Reltio Cloud platform. This position is based in Ranchi and Noida.

Roles and Responsibilities:
• Lead the design and architecture of Reltio-based MDM solutions for large-scale enterprise systems.
• Collaborate with data governance, analytics, and business teams to define data domains and governance policies.
• Define data models, match rules, survivorship, hierarchies, and integration strategies.
• Provide technical leadership for Reltio implementations, including upgrades, optimizations, and scaling.
• Conduct solution reviews and troubleshoot complex data integration and performance issues.
• Mentor developers and ensure technical deliverables meet architectural standards.

Job Requirements:
• Minimum 8 years of experience in MDM, with at least 3 years in Reltio Cloud MDM.
• Expertise in Reltio data modeling, workflow design, integration strategy, match/merge, and hierarchy management.
• Experience designing large-scale Reltio implementations across multiple domains.
• Hands-on experience with Reltio APIs, Reltio Integration Hub, and Informatica/IICS.
• Strong background in enterprise architecture, data strategy, and cloud platforms (AWS/GCP/Azure).
• Strong problem-solving, leadership, and communication skills.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Database Engineer with 5 to 10 years of experience to design, develop, and maintain our database infrastructure. This position is remote.

Roles and Responsibilities:
• Design, develop, and maintain the database infrastructure to store and manage company data efficiently and securely.
• Work with databases of varying scales, from small-scale systems to big data processing.
• Implement data security measures to protect sensitive information and comply with relevant regulations.
• Optimize database performance by analyzing query execution plans, implementing indexing strategies, and improving data retrieval and storage mechanisms.
• Collaborate with cross-functional teams to understand data requirements and support the design of the database architecture.
• Migrate data from spreadsheets or other sources to relational database systems or cloud-based solutions like Google BigQuery and AWS.
• Develop import workflows and scripts to automate data import processes.
• Ensure data integrity and enforce data quality standards, including data validation rules, constraints, and referential integrity.
• Monitor database health and resolve issues, collaborating with the full-stack web developer to implement efficient data access and retrieval mechanisms.
• Demonstrate creativity in problem-solving and contribute ideas for improving data engineering processes and workflows, exploring third-party technologies as alternatives to legacy approaches for efficient data pipelines.
• Embrace a learning mindset, staying updated with emerging database technologies, tools, and best practices, and use Python for tasks such as data manipulation, automation, and scripting.
• Collaborate with the Data Research Engineer to estimate development efforts and meet project deadlines, taking accountability for achieving development milestones.
• Prioritize tasks to ensure timely delivery in a fast-paced environment with rapidly changing priorities, collaborating with fellow members of the Data Research Engineering Team as required.
• Perform tasks with precision and build reliable systems, leveraging online resources such as StackOverflow, ChatGPT, Bard, etc. effectively, considering their capabilities and limitations.

Job Requirements:
• Proficiency in SQL and relational database management systems like PostgreSQL or MySQL, along with database design principles.
• Strong familiarity with Python for scripting and data manipulation tasks; additional knowledge of Python OOP is advantageous.
• Demonstrated problem-solving skills with a focus on optimizing database performance and automating data import processes.
• Knowledge of cloud-based databases like AWS RDS and Google BigQuery.
• Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting.
• Skills in working with APIs for data ingestion or connecting third-party systems, which can streamline data acquisition processes.
• Proficiency with tools like Prometheus, Grafana, or the ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting.
• Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions).
• Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration.
• Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL.
• Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark.
• Knowledge of SQL and an understanding of database design principles, normalization, and indexing.
• Knowledge of data migration, ETL (Extract, Transform, Load) processes, and integrating data from various sources.
• Knowledge of data security best practices, including access controls, encryption, and compliance standards.
• Strong problem-solving and analytical skills with attention to detail.
• Creative and critical thinking.
• Strong willingness to learn and expand knowledge in data engineering.
• Familiarity with Agile development methodologies is a plus.
• Experience with version control systems, such as Git, for collaborative development.
• Ability to thrive in a fast-paced environment with rapidly changing priorities.
• Ability to work collaboratively in a team environment.
• Good and effective communication skills.
• Comfortable with autonomy and the ability to work independently.

About the Company:
Marketplace is an experienced team of industry experts dedicated to helping readers make informed decisions and choose the right products with ease. We arm people with trusted advice and guidance, so they can make confident decisions and get back to doing the things they care about most.
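The query-plan analysis and indexing strategy mentioned in the responsibilities can be seen concretely with SQLite's EXPLAIN QUERY PLAN (the schema here is hypothetical; PostgreSQL and MySQL expose the same idea through their own EXPLAIN commands):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.executemany("INSERT INTO users (email) VALUES (?)",
                 [(f"user{i}@example.com",) for i in range(1000)])

def plan(sql: str) -> str:
    # Column 3 of EXPLAIN QUERY PLAN output is the human-readable detail string.
    return " ".join(r[3] for r in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT id FROM users WHERE email = 'user500@example.com'"
before = plan(query)   # without an index: a full table scan ("SCAN ...")
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan(query)    # with the index: a search using idx_users_email
print(before)
print(after)
```

The same before/after comparison is the basic workflow for tuning a slow query: read the plan, add or adjust an index, and confirm the plan changed from a scan to an index search.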
Posted 1 month ago
3.0 - 7.0 years
3 - 7 Lacs
Noida
Work from Office
We are looking for a skilled Backend Developer with strong experience in back-end development to contribute to our Smart Building Platform 2.0 (SBP 2.0) initiative, located in Bengaluru. The ideal candidate will have 3 to 7 years of experience.

Roles and Responsibilities:
• Design, develop, and maintain robust back-end solutions using SkySpark, particularly Axon and Fantom scripting.
• Collaborate with cross-functional teams to implement scalable and high-performance features.
• Contribute to technical discussions and provide solutions aligned with project goals and architectural guidelines.
• Ensure code quality through best practices, code reviews, and unit testing.
• Participate in Agile development processes, including sprint planning, daily standups, and retrospectives.
• Develop and maintain high-quality, reliable, and scalable software systems.

Job Requirements:
• Proven experience with the SkySpark platform, including hands-on work with Axon and Fantom.
• Strong understanding of back-end architecture, data models, and integration patterns.
• Familiarity with building automation systems or IoT platforms is a plus.
• Ability to write clean, maintainable, and well-documented code.
• Strong analytical and problem-solving skills.
• Excellent communication and collaboration abilities.
• Experience with building management systems (BMS), energy analytics, or smart infrastructure projects is desirable.
• Exposure to containerization, cloud platforms, or DevOps tools is beneficial.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Noida
Work from Office
We are looking for a skilled Senior Security Technical Analyst to join our team in Bangalore. The ideal candidate will have 7 to 12 years of experience in Information Security, with at least 3 years focused on SaaS security or cloud platforms.

Roles and Responsibilities:
• Ensure ongoing discovery and classification of SaaS usage across the organization, leveraging CASB and other telemetry to identify unsanctioned platforms.
• Implement and validate controls to ensure all SaaS platforms meet minimum security requirements, such as SSO, MFA, RBAC, logging, IP restrictions, and encryption.
• Oversee proper identity and access controls, secure API integrations, and enforcement of data classification, retention, and encryption policies.
• Maintain monitoring, alerting, and incident readiness for SaaS platforms, ensuring logs are integrated with the enterprise SIEM (e.g., Splunk) with real-time alerting.
• Maintain visibility into SaaS configurations, ensuring changes follow Broadridge change control standards and verifying that lower environments are also governed appropriately.
• Conduct technical risk assessments for SaaS vendors and support incident response procedures.

Job Requirements:
• Bachelor's degree in computer science, information technology, or a related field.
• Minimum 7 years of experience in Information Security, with at least 3 years focused on SaaS security or cloud platforms.
• Strong understanding of SaaS-specific risks, architecture, and controls.
• Experience working with CASB, SSPM, and SIEM tools (e.g., Microsoft Defender, Splunk).
• Understanding of identity and access management in the context of SaaS platforms and their integrations with other systems.
• Excellent written and verbal communication skills, with the ability to articulate technical topics clearly.
• Strong analytical skills and attention to detail.
• Ability to work independently in a global, matrixed organization.
• Comfortable working rotational shifts and managing competing priorities.
• Preferred certifications: CCSK, CRISC, CISA, ISO 27001, or similar cloud/security-related certifications.
• Experience in financial services or other highly regulated environments is a plus.
Posted 1 month ago
6.0 - 11.0 years
8 - 12 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd.
Industry: Employment Firms/Recruitment Services Firms
Experience: 6 to 12 years

Role: Java + Python + AI Developer
Experience: 6+ Years
Location: Remote/Hybrid opportunities available
Employment Type: Sub-Con

Key Responsibilities:
• Design, develop, and maintain enterprise-level applications using Java and Python.
• Integrate AI/ML models into production-grade systems.
• Collaborate with cross-functional teams to understand business needs and translate them into technical solutions.
• Optimize application performance and scalability.
• Participate in code reviews and design discussions, and contribute to a culture of technical excellence.

Requirements:
• 6+ years of experience in backend development with Java and Python.
• Hands-on experience with AI/ML libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
• Solid understanding of OOP, REST APIs, and microservices architecture.
• Experience with cloud platforms like AWS, Azure, or GCP is a plus.
• Strong problem-solving skills and the ability to work independently or in a team.
• Excellent communication and collaboration skills.

Nice to Have:
• Exposure to NLP, computer vision, or data analytics.
• Experience deploying ML models in production.
• Familiarity with containerization tools (Docker, Kubernetes).

Ref: 6566505
Posted 1 month ago
5.0 - 10.0 years
5 - 8 Lacs
Noida
Work from Office
Company: Apptad Technologies Pvt Ltd.
Industry: Employment Firms/Recruitment Services Firms
Experience: 5 to 12 years

Role: Cloud Security
Job Type: FTE
Location: Bangalore

Summary:
The Cloud Security Specialist drives security strategy and architecture for our cloud initiatives, combining technical expertise with strategic thinking. They collaborate across teams as a subject matter expert, promoting Everything as Code and empowering teams to tackle cloud security challenges proactively.

Role and Responsibilities:
• Provide expert-level guidance to facilitate the implementation and evolution of secure cloud and container architectures, including robust controls and best practices across cloud service models such as IaaS, PaaS, SaaS, and hybrid configurations.
• Assist in the evolution of continuous monitoring solutions that validate systems against security baselines, promptly respond to policy violations, and ensure adherence to security standards and compliance requirements.
• Identify, evaluate, and propose innovative technology solutions for cloud and container environments aimed at enhancing process efficiency, automation, security, environment visibility, and developer enablement, and at streamlining processes.
• Collaborate proactively with developers, system administrators, and IT management to ensure that security controls and processes align with company directives and goals, promoting secure-by-design principles.
• Collaborate with cross-functional teams to design and implement secure cloud architectures, encompassing network security, identity and access management (IAM), data encryption, and other essential security controls.
• Ensure compliance with relevant security standards, regulations, and frameworks (e.g., GDPR, HIPAA, ISO 27001) across all cloud-based initiatives and deployments.
• Explore opportunities to introduce automation and innovative technologies into cloud security processes, aiming to enhance efficiency, reduce manual effort, and strengthen the overall security posture.
• Provide input into the design and deployment of automated security solutions, leveraging expertise to enhance the efficacy and scalability of security measures.
• Provide guidance and training to internal teams on cloud security best practices, emerging threats, and security awareness to foster a culture of security across the organization.
• Analyze the latest attacker techniques and implement solutions to mitigate the associated risks, ensuring the resilience of cloud environments against evolving threats.
• Stay abreast of the latest cybersecurity threats and trends, proactively identifying potential vulnerabilities and recommending measures to mitigate risks.

Qualifications:
• Bachelor's degree in computer science, information technology, or a technology-related field. An advanced degree or relevant certifications (e.g., CISSP, CCSP, AWS Certified Security – Specialty) preferred.
• Seven years of experience in one, or a combination, of the network, application, cloud, or infrastructure security domains, showcasing a comprehensive understanding of security principles and practices.
• Demonstrated expertise in cloud platforms like AWS, Azure, and Google Cloud, including a deep understanding of security features such as IAM, VPC, Security Groups, and encryption services.
• Strong familiarity with networking concepts, protocols, and security principles, enabling the design and implementation of secure network architectures.
• Demonstrated experience in cloud-native architectures, microservices, and operational best practices in cloud and container orchestration.
• Experience integrating enterprise-scale security solutions in AWS and/or Azure, encompassing user, security, and networking configurations to ensure a robust security posture.
• Proficiency in full-stack cloud automation using tools like Git, Terraform, Ansible, and Jenkins; past programming experience and knowledge of Python are a plus.
• Experience aligning security programs with industry benchmarks and standards such as NIST, CIS, PCI DSS, HIPAA, and FIPS 140-2, ensuring adherence to best practices.
• Strong understanding of IT risk management, security policies and procedures, internal audit, and compliance standards. Familiarity with SOC, FFIEC, CSA, and FedRAMP is a plus.
• Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and communicate technical concepts to non-technical stakeholders.
• Proven ability to work independently, prioritize tasks, and manage multiple projects simultaneously in a fast-paced environment, ensuring timely and efficient completion of objectives.
• Commitment to continuous learning and staying updated on industry developments and emerging technologies, coupled with adaptability to evolving technology environments and requirements.
• Capacity to convey complex ideas effectively, providing definitive direction and guidance on cloud security issues to drive results and mitigate risks.

Ref: 6566288
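The "Everything as Code" idea in this posting (security baselines kept in version control and validated automatically against live configurations) can be sketched as a tiny policy check; the baseline keys and the resource shape below are invented for illustration, not any vendor's schema:

```python
# Hypothetical security baseline, checked into version control alongside the
# infrastructure code it governs.
BASELINE = {"mfa_enabled": True, "encryption_at_rest": True, "public_access": False}

def violations(resource: dict) -> list[str]:
    """Return the baseline keys this resource configuration violates."""
    return [key for key, want in BASELINE.items() if resource.get(key) != want]

# A hypothetical storage-bucket config pulled from a cloud inventory API.
bucket = {"name": "reports", "mfa_enabled": True,
          "encryption_at_rest": False, "public_access": True}
print(violations(bucket))  # ['encryption_at_rest', 'public_access']
```

In practice a tool like a CSPM scanner or a Terraform policy framework performs this comparison continuously, but the core loop is the same: diff actual configuration against a declared baseline and alert on every mismatch.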
Posted 1 month ago
4.0 - 6.0 years
3 - 6 Lacs
Noida
Work from Office
company name=Apptad Technologies Pvt Ltd., industry=Employment Firms/Recruitment Services Firms, experience=4 to 6, jd=10 BDC7A
Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.
Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Participate actively in and contribute to team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with cross-functional teams to design and implement data platform solutions.
- Develop and maintain data pipelines for efficient data processing.
- Optimize data storage and retrieval processes for improved performance.
- Implement data governance policies and ensure data quality standards are met.
- Stay updated with industry trends and best practices in data engineering.
Professional & Technical Skills:
- Must-Have Skills: Proficiency in Data Building Tool.
- Strong understanding of data modeling and database design principles.
- Experience in ETL processes and data integration techniques.
- Knowledge of cloud platforms and services for data storage and processing.
- Hands-on experience with data visualization tools for reporting and analysis.
Additional Information:
- The candidate should have a minimum of 3 years of experience in Data Building Tool.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Title=Data Building Tool, ref=6566428
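The ETL responsibilities above follow a common staging-then-cleaned-model shape. SQL-based transformation tools express that step declaratively; this hedged, plain-Python stand-in shows the same shape without any real warehouse, using invented order fields.

```python
# Extract: raw rows as they might land in a staging area (strings, mixed casing).
raw_orders = [
    {"id": "1", "amount": " 100.50 ", "status": "COMPLETE"},
    {"id": "2", "amount": "20",       "status": "cancelled"},
    {"id": "3", "amount": "35.25",    "status": "Complete"},
]

def transform(rows):
    """Normalize types and casing, and keep only completed orders."""
    cleaned = []
    for row in rows:
        status = row["status"].strip().lower()
        if status != "complete":       # data-quality rule: drop non-complete orders
            continue
        cleaned.append({"id": int(row["id"]),
                        "amount": float(row["amount"].strip()),
                        "status": status})
    return cleaned

orders = transform(raw_orders)  # "Load" here is just materializing the cleaned model
```

The point of tools in this space is to keep transforms like this versioned, tested, and documented rather than scattered across ad hoc scripts.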
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Noida
Work from Office
About the role: UKG is looking for a highly skilled Lead Software Engineer who will take a leadership role in our engineering team. As our primary DevOps expert, you will be responsible for designing and implementing scalable, reliable, and secure infrastructure solutions, as well as leading and mentoring other members of the DevOps team. Your extensive knowledge of Terraform and Ansible, coupled with your expertise in cloud technologies, will be instrumental in shaping our infrastructure and deployment strategies.
Duties and Responsibilities:
- Lead the design, implementation, and management of our cloud infrastructure using Terraform, ensuring best practices for scalability, resiliency, and security.
- Develop and maintain highly efficient Ansible playbooks and establish configuration management best practices.
- Provide technical leadership and mentorship to the DevOps team, fostering a culture of continuous learning and improvement.
- Collaborate with cross-functional teams to ensure the smooth integration and deployment of applications.
- Optimize infrastructure performance, monitor ongoing operations, and implement proactive solutions for issues and bottlenecks.
- Establish and maintain CI/CD pipelines, designing and implementing automated testing and release processes.
- Evaluate and recommend new tools and technologies to enhance the DevOps workflow and improve efficiency.
- Act as a subject matter expert for DevOps practices, staying up to date with the latest industry trends and best practices.
About you:
Basic Qualifications:
- Proven experience as a Lead DevOps Engineer, leading the design and implementation of complex and scalable infrastructures.
- Extensive expertise in Terraform and Ansible, with a deep understanding of their capabilities and best practices.
- Strong knowledge of cloud platforms (AWS, Azure, or GCP) and proficiency in designing and managing cloud resources.
- Excellent scripting and automation skills using languages like Python, Bash, or PowerShell.
- Proven ability to lead and mentor a team, fostering a collaborative and high-performance culture.
- Ability to troubleshoot and resolve complex infrastructure issues in production environments.
Preferred Qualifications:
- In-depth knowledge of containerization technologies such as Docker and orchestration tools like Kubernetes.
- Solid understanding of networking concepts, security principles, and infrastructure hardening practices.
- Experience with CI/CD tools like GitHub Actions, Jenkins, GitLab CI/CD, or CircleCI.
- Preferred certifications in Terraform and/or Google Cloud Platform (GCP).
- Exceptional problem-solving and communication skills, with the ability to effectively collaborate with cross-functional teams.
- Bachelor's or master's degree in computer science, engineering, or a related field.
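The configuration-management work above hinges on idempotency: running the same play twice should change nothing the second time. A hedged, tool-free sketch of that idea, with dicts standing in for host state (the field names are invented):

```python
def plan_changes(current, desired):
    """Return only the desired settings that differ from the current state."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_changes(current, desired):
    """Apply the delta in place and return it; an empty delta means convergence."""
    delta = plan_changes(current, desired)
    current.update(delta)
    return delta

host = {"nginx": "absent", "max_conn": 100}
want = {"nginx": "present", "max_conn": 100, "tls": "enabled"}

first_run = apply_changes(host, want)   # installs nginx, enables tls
second_run = apply_changes(host, want)  # no-op: state already matches
```

Declarative tools in this space (Terraform's plan/apply, Ansible's changed/ok reporting) are built around exactly this compare-then-converge loop, at far greater scale and with real resource providers.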
Posted 1 month ago
8.0 - 13.0 years
2 - 30 Lacs
Bengaluru
Work from Office
A Snapshot of Your Day: On a typical day, you will lead the design and implementation of scalable ETL/ELT data pipelines using Python or C#, while managing cloud-based data architectures on platforms like Azure and AWS. You'll collaborate with data scientists and analysts to ensure seamless data integration for analysis and reporting, and mentor junior engineers on standard processes. Additionally, you will supervise and optimize data pipelines for performance and cost efficiency, while ensuring compliance with data security and governance regulations.
How You'll Make An Impact: For our Onshore Execution Digital Product Development team, we are looking for a highly skilled Data Engineer with 8-10 years of experience to join our team. In this role, you will take ownership of designing and implementing data pipelines, optimizing data workflows, and supporting the data infrastructure. You will work with large datasets and cloud technologies while ensuring data quality, performance, and scalability.
- Lead the design and implementation of scalable ETL/ELT data pipelines using Python or C# for efficient data processing.
- Architect data solutions for large-scale batch and real-time processing using cloud services (AWS, Azure, Google Cloud).
- Craft and manage cloud-based data architectures with services like AWS Redshift, Google BigQuery, Azure Data Lake, and Snowflake.
- Implement cloud data solutions using Azure services such as Azure Data Lake, Blob Storage, SQL Database, Synapse Analytics, and Data Factory.
- Develop and automate data workflows for seamless integration into Azure platforms for analysis and reporting.
- Manage and optimize Azure SQL Database, Cosmos DB, and other databases for high availability and performance.
- Supervise and optimize data pipelines for performance and cost efficiency.
- Implement data security and governance practices in compliance with regulations (GDPR, HIPAA) using Azure security features.
- Collaborate with data scientists and analysts to deliver data solutions that meet business analytics needs.
- Mentor junior data engineers on standard processes in data engineering and pipeline design.
- Set up monitoring and alerting systems for data pipeline reliability.
- Ensure data accuracy and security through strong governance policies and access controls.
- Maintain documentation for data pipelines and workflows for transparency and onboarding.
What You Bring:
- 8-10 years of proven experience in data engineering with a focus on large-scale data pipelines and cloud infrastructure.
- Strong expertise in Python (Pandas, NumPy, ETL frameworks) or C# for efficient data processing solutions.
- Extensive experience with cloud platforms (AWS, Azure, Google Cloud) and their data services.
- Sophisticated knowledge of relational (PostgreSQL, MySQL) and NoSQL databases (MongoDB, Cassandra).
- Familiarity with big data technologies (Apache Spark, Hadoop, Kafka).
- Strong background in data modeling and ETL/ELT development for large datasets.
- Experience with version control (Git) and CI/CD pipelines for data solution deployment.
- Excellent problem-solving skills for troubleshooting data pipeline issues.
- Experience in optimizing queries and data processing for speed and cost efficiency.
- Preferred: experience integrating data pipelines with machine learning or AI models.
- Preferred: knowledge of Docker, Kubernetes, or containerized services for data workflows.
- Preferred: familiarity with automation tools (Apache Airflow, Luigi, dbt) for managing data workflows.
- Preferred: understanding of data privacy regulations (GDPR, HIPAA) and governance practices.
About The Team: Who is Siemens Gamesa? Siemens Gamesa is part of Siemens Energy, a global leader in energy technology with a rich legacy of innovation spanning over 150 years. Together, we are committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. As a leading player in the wind industry and a manufacturer of wind turbines, we are passionate about driving the energy transition and providing innovative solutions that meet the growing energy demand of the global community. At Siemens Gamesa, we are always looking for dedicated individuals to join our team and support our focus on energy transformation.
Our Commitment to Diversity: Lucky for us, we are not all the same. Through diversity, we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character no matter what ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.
Rewards/Benefits: All employees are automatically covered under the company-paid medical insurance, a considerable family-floater cover for the employee, spouse, and two dependent children up to 25 years of age. Siemens Gamesa gives all its employees the option of a Meal Card, per the terms and conditions prescribed in company policy, as a part of CTC and a tax-saving measure. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
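The batch-pipeline duties above (quality, deduplication, aggregation) can be sketched in miniature. This is an illustrative stand-in with invented turbine-reading fields; a production pipeline would run the same validate-dedupe-aggregate stages in Spark or SQL at far larger scale.

```python
from collections import defaultdict

def run_batch(readings):
    """Validate, deduplicate on a natural key, then aggregate per turbine."""
    seen, totals = set(), defaultdict(float)
    for r in readings:
        if r["value"] is None:          # validation: drop null measurements
            continue
        key = (r["turbine"], r["ts"])   # natural key for dedupe
        if key in seen:
            continue
        seen.add(key)
        totals[r["turbine"]] += r["value"]
    return dict(totals)

readings = [
    {"turbine": "T1", "ts": 1, "value": 5.0},
    {"turbine": "T1", "ts": 1, "value": 5.0},   # duplicate delivery of the same row
    {"turbine": "T1", "ts": 2, "value": 3.0},
    {"turbine": "T2", "ts": 1, "value": None},  # fails validation
]
totals = run_batch(readings)
```

Separating the stages like this is what makes a pipeline testable: each rule (null handling, dedupe key, aggregation) can be asserted on its own.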
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
The team is looking for an experienced Java-based middle-tier developer to help build our data integration layer utilizing the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing cutting-edge server-side applications that push the boundaries of what is possible and exists within the bank today.
Your Role - What You'll Do: As a Java Microservices engineer you will be responsible for designing, developing, and maintaining scalable microservices using Java and Spring Boot. You will collaborate with cross-functional teams to deliver features and enhancements on time, ensuring code quality and supporting the overall business requirements.
Key Responsibilities:
- Develop and maintain scalable and reliable microservices using Java, Spring Boot, and related technologies.
- Implement RESTful APIs and support integrations with other systems.
- Collaborate with various stakeholders (QA, DevOps, PO, and Architects) to ensure the business requirements are met.
- Participate in code reviews, troubleshooting, and mentoring junior members.
Your Skills and Experience - Skills You'll Need:
Must Have:
- Overall experience of 5+ years with hands-on coding/engineering skills, extensively in Java technologies and microservices.
- Strong understanding of microservices architecture, patterns, and practices.
- Proficiency in Spring Boot, Spring Cloud, and development of REST APIs.
Desirable skills that will help you excel:
- Prior experience working in an Agile/Scrum environment.
- Good understanding of containerization (Docker/Kubernetes), databases (SQL and NoSQL), and build tools (Maven/Gradle).
- Knowledge of architecture and design principles, algorithms and data structures, and UI.
- Exposure to cloud platforms is a plus (preferably GCP).
- Knowledge of Kafka, RabbitMQ, etc., would be a plus.
- Strong problem-solving and communication skills.
Working knowledge of Git, Jenkins, CI/CD, Gradle, DevOps, and SRE techniques.
Educational Qualifications: Bachelor's degree in Computer Science/Engineering or a relevant technology and science discipline; technology certifications from any industry-leading cloud provider.
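One resilience pattern that comes up constantly in the microservice integrations described above is retry with exponential backoff for calls to flaky upstream services. This is a language-agnostic sketch written in Python for brevity; the failing service is simulated, and a real client would sleep for each delay rather than just record it.

```python
def call_with_retry(fn, attempts=4, base_delay=0.1):
    """Call fn, retrying on ConnectionError with exponentially growing delays."""
    delays = []
    for i in range(attempts):
        try:
            return fn(), delays
        except ConnectionError:
            if i == attempts - 1:
                raise                      # out of attempts: surface the failure
            delays.append(base_delay * (2 ** i))  # 0.1, 0.2, 0.4, ...

failures = {"left": 2}
def flaky():
    """Simulated upstream: fails twice, then succeeds."""
    if failures["left"] > 0:
        failures["left"] -= 1
        raise ConnectionError("upstream unavailable")
    return "ok"

result, delays = call_with_retry(flaky)
```

In a Spring Boot service the same idea is usually expressed through a library (e.g. a retry/circuit-breaker annotation) rather than hand-rolled, but the backoff schedule is the part worth understanding either way.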
Posted 1 month ago
3.0 - 7.0 years
12 - 15 Lacs
Bengaluru
Work from Office
About HashiCorp: HashiCorp solves development, operations, and security challenges in infrastructure so organizations can focus on business-critical tasks. We build products to give organizations a consistent way to manage their move to cloud-based IT infrastructures for running their applications. Our products enable companies large and small to mix and match AWS, Microsoft Azure, Google Cloud, and other clouds as well as on-premises environments, easing their ability to deliver new applications. We use the Tao of HashiCorp as our guiding principles for product development and operate according to a strong set of company principles for how we interact with each other. We value top-notch collaboration and communication skills, both among internal teams and in how we interact with our users.
Our Team: The HashiCorp Incident Excellence team is responsible for improving HashiCorp's incident response while maximizing learning from incidents. Our focus is on helping all engineers feel confident when they are on-call and improving communication to efficiently resolve incidents and build trust in our brand. We partner closely with teams to drive a holistic incident management strategy and share learnings to help our business continuously improve.
About This Role: This engineering role is on a nascent engineering team. The team is responsible for products that touch many areas of engineering organizations at HashiCorp, so applicants will need to excel at collaboration, have product-focused mindsets, and be comfortable iterating in an agile manner towards solutions. You will provide expert execution of the incident command process, including running and managing high-severity incident bridges and driving transparent communication that promotes maximum levels of internal and external customer satisfaction. You will collaborate with an array of technical stakeholders and executives to drive resolution during incidents and improve overall response for future incidents and technical escalations; utilize top-notch troubleshooting techniques to identify, organize, and advocate for novel solutions to remediate customer impact on complex interconnected systems; participate in a closed-loop post-incident learning process, driving insights and meaningful action; drive iterative improvements in response through consistent drills, tabletops, and game-day exercises; and push the boundaries of innovation in incident management to deliver best-in-class incident response.
In This Role, You Can Expect To:
- Be responsible for and drive incident management capabilities and culture.
- Contribute to incident command on-call.
- Build technical skills and relationships within a team of engineers and SREs.
- Lead and refine our incident response strategy, ensuring rapid and effective response to operational disruptions.
- Analyze incident trends and root causes to drive continuous improvements in system reliability and response processes.
- Develop and maintain tools for incident detection, analysis, and resolution, automating responses where possible to minimize human intervention.
- Create comprehensive incident response documentation and conduct training sessions to prepare all relevant teams for effective incident handling.
- Work closely with development, operations, and security teams to coordinate incident response efforts and post-incident analyses.
You may be a good fit for our team if you have:
- A minimum of 10-12 years of experience in site reliability engineering, systems administration, or software engineering, with a significant focus on incident response and operational reliability.
- 8+ years managing, coordinating, and ensuring resolution of major incidents.
- Professional experience with incident management in cloud environments.
- Enjoyment of working on a variety of scopes spanning software engineering, cloud infrastructure, and SRE.
- A proven track record of managing and resolving incidents in cloud-based environments, with expertise in major public cloud platforms (AWS, GCP, Azure).
- An understanding of fundamental network technologies like DNS, load balancing, SSL, TCP/IP, and HTTP.
- A strong understanding of monitoring and alerting systems, with the ability to develop metrics and alarms that accurately reflect system health and operational risks.
- Experience with incident management tools and practices, including post-mortem analysis and root cause investigation.
- A passion for consistently responding to and leading complex incidents in a 24x7x365 environment utilizing a globalized follow-the-sun model.
- A customer-centric attitude with a focus on providing best-in-class incident response for customers and stakeholders.
- Familiarity with HashiCorp's product suite and infrastructure automation tools (a plus).
- Strong leadership skills during periods of significant business impact, remaining calm and professional during high-pressure situations.
- A strong desire to drive customer success with partner teams and management on high-profile issues critical to the long-term success of the business.
- Outstanding verbal and written communication skills, with the ability to convey information in a meaningful way to both engineers and executive-level management, during and outside of incidents.
- Adaptability to a wide variety of technologies, with capability for incident response and troubleshooting activities in complex interconnected environments.
HashiCorp has been acquired by IBM and will be integrated into the IBM organization; HashiCorp will be the hiring entity. By proceeding with this application you understand that HashiCorp will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: link to IBM privacy statement.
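Teams like this typically track response metrics such as mean time to acknowledge (MTTA) and mean time to resolve (MTTR). A hedged sketch of that computation over invented per-incident timestamps (epoch minutes here):

```python
def incident_metrics(incidents):
    """Compute MTTA and MTTR from opened/acked/resolved timestamps."""
    n = len(incidents)
    mtta = sum(i["acked"] - i["opened"] for i in incidents) / n
    mttr = sum(i["resolved"] - i["opened"] for i in incidents) / n
    return {"mtta": mtta, "mttr": mttr}

incidents = [
    {"opened": 0,  "acked": 5,  "resolved": 60},   # acked in 5 min, resolved in 60
    {"opened": 10, "acked": 13, "resolved": 40},   # acked in 3 min, resolved in 30
]
metrics = incident_metrics(incidents)
```

Trend lines on these numbers, broken down by service and severity, are what turn post-incident reviews into the "continuous improvements in system reliability" the role calls for.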
Posted 1 month ago
6.0 - 11.0 years
13 - 17 Lacs
Gurugram
Work from Office
Department: Engineering. Employment Type: Full Time. Location: India.
Description: Shape the Future of Work with Eptura. At Eptura, we're not just another tech company: we're a global leader transforming the way people, workplaces, and assets connect. Our innovative worktech solutions empower 25 million users across 115 countries to thrive in a digitally connected world. Trusted by 45% of Fortune 500 companies, we're redefining workplace innovation and driving success for organizations around the globe.
Job Description: We are seeking a Technical Lead, Data Engineering to spearhead the design, development, and optimization of complex data pipelines and ETL processes. This role requires deep expertise in data modeling, cloud platforms, and automation to ensure high-quality, scalable solutions. You will collaborate closely with stakeholders, engineers, and business teams to drive data-driven decision-making across our organization.
Responsibilities:
- Work with stakeholders to understand data requirements and architect end-to-end ETL solutions.
- Design and maintain data models, including schema design and optimization.
- Develop and automate data pipelines to ensure quality, consistency, and efficiency.
- Lead the architecture and delivery of key modules within data platforms.
- Build and refine complex data models in Power BI, simplifying data structures with dimensions and hierarchies.
- Write clean, scalable code using Python, Scala, and PySpark (must-have skills).
- Test, deploy, and continuously optimize applications and systems.
- Mentor team members and participate in engineering hackathons to drive innovation.
About You:
- 7+ years of experience in data engineering, with at least 2 years in a leadership role.
- Strong expertise in Python, PySpark, and SQL for data processing and transformation.
- Hands-on experience with Azure cloud computing, including Azure Data Factory and Databricks.
- Proficiency in analytics/visualization tools: Power BI, Looker, Tableau, IBM Cognos.
- Strong understanding of data modeling, including dimensions and hierarchy structures.
- Experience working with Agile methodologies and DevOps practices (GitLab, GitHub).
- Excellent communication and problem-solving skills in cross-functional environments.
- Ability to reduce added cost, complexity, and security risks with scalable analytics solutions.
Nice To Have:
- Experience working with NoSQL databases (Cosmos DB, MongoDB).
- Familiarity with AutoCAD and building systems for advanced data visualization.
- Knowledge of identity and security protocols, such as SAML, SCIM, and FedRAMP compliance.
Benefits: Health insurance fully paid, with spouse, children, and parents covered; accident insurance fully paid; flexible working allowance; 25 days of holiday; 7 paid sick days; 10 public holidays; Employee Assistance Program.
Eptura Information: Follow us on Twitter | LinkedIn | Facebook | YouTube. Eptura is an Equal Opportunity Employer. At Eptura we promote our flexible workspace environment, free from discrimination. We believe that diversity of experience, perspective, and background leads to a better environment for all our people and a better product for our customers. Everyone is welcome at Eptura, no matter where you are from, and the more diverse we are, the more unified we will be in ensuring respectful connections all around the world.
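The "dimensions and hierarchies" requirement above boils down to rolling fact rows up a dimension hierarchy via a lookup, the way a star-schema join feeds a BI hierarchy. A minimal, hedged sketch with invented city/region data:

```python
# Dimension table collapsed to a lookup: leaf level (city) -> parent level (region).
dim_geography = {"Pune": "West", "Mumbai": "West", "Gurugram": "North"}

def rollup(facts, hierarchy):
    """Aggregate a fact measure at the parent level of a dimension hierarchy."""
    out = {}
    for f in facts:
        region = hierarchy[f["city"]]          # the star-schema join
        out[region] = out.get(region, 0) + f["sales"]
    return out

facts = [{"city": "Pune", "sales": 10},
         {"city": "Mumbai", "sales": 15},
         {"city": "Gurugram", "sales": 7}]
by_region = rollup(facts, dim_geography)
```

In Power BI or SQL this is a join plus GROUP BY; the design question the role owns is choosing the grain of the fact table and the levels of each hierarchy so rollups like this stay cheap and unambiguous.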
Posted 1 month ago
1.0 - 5.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Position Title: R&D AI/ML Product Engineer.
About The Job: At Sanofi, we're committed to providing the next-gen healthcare that patients and customers need. It's about harnessing data insights and leveraging AI responsibly to search deeper and solve sooner than ever before. Join our R&D Data & AI Products and Platforms Team as an AI/ML Product Engineer and you can help make it happen.
What you will be doing: Sanofi has recently embarked on a vast and ambitious digital transformation program. A cornerstone of this roadmap is the acceleration of its data transformation and of the adoption of artificial intelligence (AI) and machine learning (ML) solutions, to accelerate R&D, manufacturing, and commercial performance and bring better drugs and vaccines to patients faster, to improve health and save lives. The R&D Data & AI Products and Platforms Team is a key team within R&D Digital, focused on developing and delivering Data and AI products for R&D use cases. This team plays a critical role in pursuing broader democratization of data across R&D and providing the foundation to scale AI/ML, advanced analytics, and operational analytics capabilities. As an AI/ML Product Engineer, you will join this dynamic team committed to driving strategic and operational digital priorities and initiatives in R&D. You will work as part of a Data & AI Product Delivery Pod, led by a Product Owner, in an agile environment to deliver Data & AI Products. As part of this team, you will be responsible for designing and developing the endpoints or databases where AI/ML models can be deployed and accessed. You will have the ability to work on multiple AI/ML products serving multiple areas of the business.
Our vision for digital, data analytics, and AI: Join us on our journey in enabling Sanofi's digital transformation through becoming an AI-first organization. This means: AI Factory, with versatile teams operating in cross-functional pods, utilizing digital and data resources to develop AI products, bringing data management, AI, and product development skills to products, programs, and projects to create an agile, fulfilling, and meaningful work environment; a leading-edge tech stack, with experience building products that will be deployed globally; and world-class mentorship and training, working with renowned leaders and academics in machine learning to further develop your skill set. We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people's lives. We're also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?
Main Responsibilities - AI/ML Product Engineering:
- Provide input into the engineering feasibility of developing specific R&D AI Products.
- Provide input to the Data/AI Product Owner and Scrum Master to support planning, capacity, and resource estimates.
- Collaborate with AI/ML model development teams to understand the inner workings of the AI/ML model and handoffs.
- Engineer the AI/ML product (incl. workflows, API endpoints, databases) based on defined requirements for specific use cases, typically involving the processing of data directly from an AI/ML model and subsequently consuming the data output from the refined AI/ML model.
- Develop an intuitive user interface for users to interact with the AI/ML model.
- Collaborate with the Data/AI Product Owner and Scrum Master to share progress on engineering activities and inform of any delays, issues, bugs, or risks with proposed remediation plans.
- Design, develop, and deploy APIs, data feeds, or specific features required by product design and user stories.
- Optimize workflows to drive high performance and reliability of implemented AI products.
- Oversee and support junior engineers with Data/AI Product testing requirements and execution.
Innovation & Team Collaboration:
- Stay current on industry trends, emerging technologies, and best practices in data product engineering.
- Contribute to a team culture of innovation, collaboration, and continuous learning within the product team.
About You - Key Functional Requirements & Qualifications:
- Bachelor's degree in software engineering or a related field, or equivalent work experience.
- 3-5 years of experience in AI/ML product engineering, software engineering, or another related field.
- Deep understanding and proven track record of developing data pipelines and workflows.
- Understanding of the R&D business and data environment preferred.
- Excellent communication and collaboration skills.
- Working knowledge of, and comfort working with, Agile methodologies.
Key Technical Requirements & Qualifications:
- Proficiency with data analytics and statistical software (incl. SQL, Python, Java, Excel, AWS, Snowflake, Informatica).
- Experience with the design and development of APIs/endpoints (e.g., Flask, Django, FastAPI).
- Expertise in cloud platforms and software involved in the deployment and scaling of AI/ML models.
Why Choose Us: Bring the miracles of science to life alongside a supportive, future-focused team. Discover endless opportunities to grow your talent and drive your career, whether it's through a promotion or a lateral move, at home or internationally. Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact. Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention, and wellness programs.
Pursue Progress. Discover Extraordinary. Better is out there: better medications, better outcomes, better science. But progress doesn't happen without people, people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let's be those people. Watch our ALL IN video and check out our Diversity, Equity and Inclusion actions at sanofi.com!
Sanofi is an equal opportunity employer committed to diversity and inclusion. Our goal is to attract, develop, and retain highly talented employees from diverse backgrounds, allowing us to benefit from a wide variety of experiences and perspectives. We welcome and encourage applications from all qualified applicants. Accommodations for persons with disabilities required during the recruitment process are available upon request. Thank you in advance for your interest. Only those candidates selected for interviews will be contacted.
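The "endpoint in front of an AI/ML model" pattern this role describes can be sketched framework-free: a handler validates the request, calls the model, and shapes the response. The model below is a trivial stand-in and every field name is invented for illustration; in practice the handler would live behind Flask/Django/FastAPI routing.

```python
def score(features):
    """Stand-in model: sum of normalized feature values, thresholded to a label."""
    total = sum(features.values())
    return {"label": "high" if total > 1.0 else "low", "score": total}

def predict_handler(request):
    """Validate the request body, invoke the model, and return a shaped response."""
    features = request.get("features")
    if not isinstance(features, dict) or not features:
        return {"status": 400, "error": "features must be a non-empty object"}
    return {"status": 200, "prediction": score(features)}

ok = predict_handler({"features": {"age_norm": 0.7, "dose_norm": 0.6}})
bad = predict_handler({"features": {}})
```

Keeping validation and response shaping outside the model function is what lets the same model be swapped or retrained without touching the API contract the consumers depend on.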
Posted 1 month ago
5.0 - 9.0 years
11 - 15 Lacs
Gurugram
Work from Office
Job Title: Front-End Architect. Job Type: Full-time. Location: Hybrid, Gurugram, Haryana, India (Gurgaon).
About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary: Join our customer's team as a Front-End Architect and play a pivotal role in shaping robust, scalable, and high-performance web applications. You will lead the design and technical direction of modern user interfaces, embedding best practices across architecture, cloud-native design, and frontend-backend integration. This is an exciting opportunity to make a tangible impact on the end-user experience while working in a collaborative, innovation-driven environment.
Key Responsibilities:
- Define and lead the front-end architecture using React.js and associated frameworks to build scalable and maintainable applications.
- Design and implement reusable, modular UI component libraries to drive consistency and efficiency across projects.
- Collaborate closely with backend and DevOps teams to ensure seamless integration with RESTful or Fast APIs, aligning architecture for optimal performance.
- Engineer cloud-optimized frontend solutions leveraging AWS or Azure, with experience in serverless web app architectures.
- Utilize CDN, edge caching, and performance optimization techniques to deliver low-latency, globally distributed user experiences.
- Champion infrastructure-as-code and CI/CD pipelines tailored for streamlined frontend deployment and rollback strategies.
- Mentor and guide UI and API developers, facilitating seamless integration and upholding code quality standards.
- Engage with clients to discuss solution design and architecture, articulating technical concepts in a clear, compelling manner.
Required Skills and Qualifications:
- 6+ years of hands-on experience in front-end development, with expert-level proficiency in React.js and modern JavaScript.
- Demonstrated expertise in designing scalable front-end architectures and reusable component libraries.
- Strong background in integrating with RESTful/Fast APIs and collaborating within cross-functional, agile teams.
- In-depth knowledge of cloud platforms (AWS or Azure) and cloud-native development patterns.
- Experience with performance tuning: CDN, caching, state management, and responsive design principles.
- Proficiency in setting up and maintaining CI/CD pipelines and infrastructure-as-code for frontend projects.
- Exceptional written and verbal communication skills, with a proven ability to document and present complex architectural concepts.
Preferred Qualifications:
- Experience designing and deploying serverless architectures for frontend applications.
- Familiarity with security best practices in cloud-based frontend deployments.
- Past experience leading technical client discussions and requirements-gathering sessions.
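The CDN/edge-caching responsibility above centers on one decision: is a cached response fresh, servable-but-stale (serve it and revalidate in the background), or a miss? A hedged, Cache-Control-style sketch of that logic, in Python for brevity, with all TTL numbers invented:

```python
def cache_state(age, max_age, stale_while_revalidate=0):
    """Classify a cached response by its age against TTL-style directives."""
    if age <= max_age:
        return "fresh"                         # serve directly from the edge
    if age <= max_age + stale_while_revalidate:
        return "stale-serve-and-revalidate"    # serve stale, refresh in background
    return "miss"                              # must fetch from origin

# Example: max-age=60, stale-while-revalidate=30, responses aged 10s, 75s, 120s.
states = [cache_state(a, max_age=60, stale_while_revalidate=30)
          for a in (10, 75, 120)]
```

The stale-serve window is what keeps latency low for global users during revalidation; the architectural work is choosing per-asset TTLs so correctness and freshness trade off sensibly.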
Posted 1 month ago
3.0 - 6.0 years
6 - 10 Lacs
Gurugram
Work from Office
Job Title: Senior Frontend Developer. Job Type: Full-time. Location: Hybrid, Gurugram, Haryana, India (Gurgaon).
About Us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest-growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary: Join our customer's team as a Senior Frontend Developer and play a pivotal role in shaping robust, scalable, and high-performance web applications. You will lead the design and technical direction of modern user interfaces, embedding best practices across architecture, cloud-native design, and frontend-backend integration. This is an exciting opportunity to make a tangible impact on the end-user experience while working in a collaborative, innovation-driven environment.
Key Responsibilities:
- Define and lead the front-end architecture using React.js and associated frameworks to build scalable and maintainable applications.
- Design and implement reusable, modular UI component libraries to drive consistency and efficiency across projects.
- Collaborate closely with backend and DevOps teams to ensure seamless integration with RESTful or Fast APIs, aligning architecture for optimal performance.
- Engineer cloud-optimized frontend solutions leveraging AWS or Azure, with experience in serverless web app architectures.
- Utilize CDN, edge caching, and performance optimization techniques to deliver low-latency, globally distributed user experiences.
- Champion infrastructure-as-code and CI/CD pipelines tailored for streamlined frontend deployment and rollback strategies.
- Mentor and guide UI and API developers, facilitating seamless integration and upholding code quality standards.
- Engage with clients to discuss solution design and architecture, articulating technical concepts in a clear, compelling manner.
Required Skills and Qualifications:
- 6+ years of hands-on experience in front-end development, with expert-level proficiency in React.js and modern JavaScript.
- Demonstrated expertise in designing scalable front-end architectures and reusable component libraries.
- Strong background in integrating with RESTful/Fast APIs and collaborating within cross-functional, agile teams.
- In-depth knowledge of cloud platforms (AWS or Azure) and cloud-native development patterns.
- Experience with performance tuning: CDN, caching, state management, and responsive design principles.
- Proficiency in setting up and maintaining CI/CD pipelines and infrastructure-as-code for frontend projects.
- Exceptional written and verbal communication skills, with a proven ability to document and present complex architectural concepts.
Preferred Qualifications:
- Experience designing and deploying serverless architectures for frontend applications.
- Familiarity with security best practices in cloud-based frontend deployments.
- Past experience leading technical client discussions and requirements-gathering sessions.
Posted 1 month ago
3.0 - 7.0 years
9 - 13 Lacs
Pune
Work from Office
Your Role - What You'll Do: As a DevOps Engineer, you will automate, deploy, and maintain scalable infrastructure. You have expertise in building CI/CD pipelines, containerization, and cloud platforms. You will collaborate with development and operations teams to ensure high availability, performance, and security of the system.
Key Responsibilities: Build, enhance, and maintain CI/CD/CT automation pipelines across environments. Set up observability for the application. Monitor and report on application uptime, performance, and other metrics.
Skills You'll Need - Must Have: 4+ years of hands-on experience in the role of DevOps engineer. Proficiency in at least one scripting language: Shell, Perl, Python, or Go. Solid understanding of DevOps concepts such as build/deployment pipeline creation and maintenance.
Desirable skills that will help you excel: Awareness of the SDLC process. An automation mindset. Good understanding of SaaS vs. PaaS vs. IaaS. Understanding of IaC. Exposure to API, UI, and database automation. Hands-on experience in database hosting activities. Basic understanding of containerization.
Educational Qualifications: Bachelor's degree in Computer Science/Engineering or a relevant technology or science field. Technology certifications from any industry-leading cloud provider.
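The monitoring duties above (observability, uptime, and performance reporting) can be illustrated with a small sketch. This is a hypothetical helper, not tied to any specific monitoring product: the `HealthCheck` record, the nearest-rank p95 choice, and the synthetic check data are all assumptions for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class HealthCheck:
    timestamp: float   # epoch seconds of the probe
    healthy: bool      # did the endpoint respond OK?
    latency_ms: float  # round-trip time of the probe

def uptime_percent(checks: List[HealthCheck]) -> float:
    """Fraction of health checks that passed, as a percentage."""
    if not checks:
        return 0.0
    passed = sum(1 for c in checks if c.healthy)
    return 100.0 * passed / len(checks)

def p95_latency(checks: List[HealthCheck]) -> float:
    """95th-percentile latency over healthy checks (nearest-rank method)."""
    latencies = sorted(c.latency_ms for c in checks if c.healthy)
    if not latencies:
        return 0.0
    rank = max(1, round(0.95 * len(latencies)))
    return latencies[rank - 1]

# Synthetic probe data: every 10th check fails, latency varies 40-46 ms.
checks = [HealthCheck(t, t % 10 != 0, 40 + t % 7) for t in range(100)]
print(f"uptime: {uptime_percent(checks):.1f}%, p95: {p95_latency(checks):.1f} ms")
```

In practice these numbers would come from a scraper or an agent feeding a system like Prometheus or CloudWatch; the sketch only shows the aggregation step behind an uptime/latency report.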
Posted 1 month ago
5.0 - 10.0 years
25 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions. Required Candidate profile Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams
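The ETL/ELT work described above can be sketched at its smallest: extract raw records, transform them into a clean shape, and load them into a sink. Everything here is hypothetical for illustration: the field names (`cust_id`, `amount`), the in-memory sink standing in for a warehouse, and the skip-on-error policy are assumptions, not any particular team's pipeline.

```python
import json
from datetime import datetime, timezone

def extract(raw_lines):
    """Parse newline-delimited JSON records, skipping malformed lines."""
    records = []
    for line in raw_lines:
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError:
            continue  # a real pipeline might route these to a dead-letter queue
    return records

def transform(records):
    """Normalise field names, coerce types, and stamp each row."""
    out = []
    for r in records:
        if "amount" not in r:
            continue  # drop rows missing the required field
        out.append({
            "customer_id": str(r.get("cust_id", "")).strip(),
            "amount": float(r["amount"]),
            "loaded_at": datetime.now(timezone.utc).isoformat(),
        })
    return out

def load(rows, sink):
    """Append transformed rows to an in-memory sink (stand-in for a warehouse)."""
    sink.extend(rows)
    return len(rows)

raw = ['{"cust_id": " 42 ", "amount": "19.99"}', 'not json', '{"cust_id": 7}']
sink = []
n = load(transform(extract(raw)), sink)
print(n, sink[0]["customer_id"], sink[0]["amount"])
```

A production pipeline would swap the in-memory pieces for an orchestrator (e.g. Airflow) and a real warehouse, but the extract/transform/load separation shown here is the shape the role description refers to.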
Posted 1 month ago
3.0 - 6.0 years
5 - 9 Lacs
Gurugram
Work from Office
Experience: 8-10 years. Job Title: DevOps Engineer. Location: Gurugram.
Job Summary: We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.
Key Responsibilities:
Cloud Infrastructure & Automation: Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms. Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning. Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.
Containerization & Orchestration: Containerize applications using Docker for seamless development and deployment. Manage and scale Kubernetes clusters (on-premise or cloud-managed, like AWS EKS). Monitor and optimize container environments for performance, scalability, and cost-efficiency.
Security & Compliance: Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager). Conduct regular vulnerability assessments and security scans, and implement remediation plans. Ensure infrastructure compliance with industry standards and manage incident response protocols.
Monitoring & Optimization: Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic). Analyze logs and metrics to troubleshoot issues and improve system performance. Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling: Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management. Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.
Collaboration & Leadership: Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs. Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement. Communicate technical concepts effectively to both technical and non-technical stakeholders.
Education: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience. Certifications: AWS Certified DevOps Engineer Professional (preferred) or other relevant cloud certifications.
Experience: 8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity. Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents. Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps. Advanced knowledge of the Docker and Kubernetes ecosystem. Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible. Proficient in scripting (Shell, Python) for automation and tooling. Experience implementing DevSecOps practices and advanced security configurations. Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.
Soft Skills: Strong problem-solving abilities and the capacity to work under pressure. Excellent communication and team collaboration. Organized, with attention to detail and a commitment to quality.
Nice-to-Have Skills: Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean). Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).
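The backup and log-management scripting mentioned above often reduces to a retention policy: keep the newest N artifacts, delete the rest. A minimal sketch, assuming a hypothetical `prune_backups` helper working on (path, mtime) pairs; the paths and keep-count are made up for illustration:

```python
def prune_backups(paths_with_mtimes, keep=5):
    """Given (path, mtime) pairs, return the paths to delete,
    keeping only the `keep` most recent backups."""
    ranked = sorted(paths_with_mtimes, key=lambda p: p[1], reverse=True)
    return [path for path, _ in ranked[keep:]]

# Eight hourly dumps; with keep=5, the three oldest are slated for deletion.
backups = [(f"/backups/db-{i}.dump", 1_700_000_000 + i * 3600) for i in range(8)]
to_delete = prune_backups(backups, keep=5)
print(to_delete)
```

A real script would gather the (path, mtime) pairs with `os.scandir` and actually unlink the files; separating the pure "what to delete" decision from the deletion itself keeps the policy easy to test.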
Posted 1 month ago
5.0 - 10.0 years
11 - 16 Lacs
Pune
Work from Office
Job Title: Engineer Java Microservices. Corporate Title: Associate. Location: Pune, India.
Role Description: Our agile development team is looking for an experienced Java-based middle-tier developer to help build our data integration layer using the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing cutting-edge server-side applications that push the boundaries of what is possible and what exists within the bank today.
Your Role - What You'll Do: As a Java Microservices engineer, you will be responsible for designing, developing, and maintaining scalable microservices using Java and Spring Boot. You will collaborate with cross-functional teams to deliver features and enhancements on time, ensuring code quality and supporting the overall business requirements.
Key Responsibilities: Develop and maintain scalable and reliable microservices using Java, Spring Boot, and related technologies. Implement RESTful APIs and support integrations with other systems. Collaborate with various stakeholders (QA, DevOps, PO, and Architects) to ensure the business requirements are met. Participate in code reviews, troubleshooting, and mentoring of junior members.
Skills You'll Need - Must Have: Overall experience of 5+ years with extensive hands-on coding/engineering skills in Java technologies and microservices. Strong understanding of microservices architecture, patterns, and practices. Proficiency in Spring Boot, Spring Cloud, and development of REST APIs.
Desirable skills that will help you excel: Prior experience working in an Agile/Scrum environment. Good understanding of containerization (Docker/Kubernetes), databases (SQL and NoSQL), and build tools (Maven/Gradle). Knowledge of architecture and design principles, algorithms and data structures, and UI.
Exposure to cloud platforms is a plus (preferably GCP). Knowledge of Kafka, RabbitMQ, etc. would be a plus. Strong problem-solving and communication skills. Working knowledge of Git, Jenkins, CI/CD, Gradle, DevOps, and SRE techniques.
Educational Qualifications: Bachelor's degree in Computer Science/Engineering or a relevant technology or science field. Technology certifications from any industry-leading cloud provider.
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Gurgaon, Haryana, India
On-site
Maintain, upgrade, and evolve data pipeline architectures to ensure optimal performance and scalability. Orchestrate the integration of new data sources into existing pipelines for further processing and analysis. Keep documentation up to date for pipelines and data feeds to facilitate smooth operations and collaboration within the team. Collaborate with cross-functional teams to understand data requirements and optimize pipeline performance accordingly. Troubleshoot and resolve any issues related to pipeline architecture and data processing. Role Requirements and Qualifications: Experience with cloud platforms for deployment and management of data pipelines. Familiarity with AWS / Azure for efficient data processing workflows. Experience with constructing FAIR data products is highly desirable. Basic understanding of computational clusters to optimize pipeline performance. Prior experience in data engineering or operations roles, preferably in a cloud-based environment. Proven track record of successfully maintaining and evolving data pipeline architectures. Strong problem-solving skills and ability to troubleshoot technical issues independently. Excellent communication skills to collaborate effectively with cross-functional teams.
Posted 1 month ago