0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
SKF has been around for more than a century and today we are one of the world’s largest global suppliers of bearings and supporting solutions for rotating equipment. Our products can be found literally everywhere in society. This means that we are an important part of the everyday lives of people and companies around the world. In September of 2024, SKF announced the separation of its Automotive business, with the objective to build two world-leading businesses. The role you are applying for will be part of the automotive business. This means you will have the opportunity to be a part of shaping a new company aimed at meeting the needs of the transforming global automotive market. Would you like to join us in shaping the future of motion? We are now looking for a … Data Engineer, India – Automotive Business. Design, build, and maintain the data infrastructure and systems that support SKF VA data needs. By leveraging skills in data modeling, data integration, data processing, data storage, data retrieval, and performance optimization, this role helps VA manage and utilize its data more effectively. Key responsibilities (or what you can expect in the role): Build a VA data warehouse that is scalable, secure, and compliant using Snowflake technologies; this includes designing and developing Snowflake data models. Work with central data warehouses such as SDW, MDW, and OIDW to extract data and enrich it with VA-specific customer groupings, program details, etc. Data integration: Responsible for integrating data from ERPs, BPC, and other systems into Snowflake and SKF standard DWs, ensuring that data is accurate, complete, and consistent. Performance optimization: Responsible for optimizing the performance of Snowflake queries and data loading processes. This involves optimizing SQL queries, creating indexes, and tuning data loading processes. Security and access management: Responsible for managing the security and access controls of the Snowflake environment. This includes configuring user roles and permissions, managing encryption keys, and monitoring access logs. Maintain existing databases and warehouse solutions, addressing support needs, enhancements, troubleshooting, etc. Metrics. Technical metrics: data quality for the whole VA BU, data processing time, data storage capacity, and systems availability. Business metrics: data-driven decision making, data security and compliance, cross-functional collaboration. Competencies: Should have a good understanding of data modeling concepts and be familiar with Snowflake's data modeling tools and techniques. SQL: Should be an expert in SQL, able to write complex SQL queries and understand how to optimize SQL performance in Snowflake. Pipeline Management & ETL: Should be able to design and manage data pipelines on Snowflake and Azure, using ETL/ELT tools (e.g., DBT, Alteryx, Talend, Informatica). Should have a good understanding of cloud computing concepts and be familiar with the cloud infrastructure on which Snowflake operates. Good understanding of data warehousing concepts and familiarity with Snowflake's data warehousing tools and techniques. Familiar with data governance and security concepts. Able to identify and troubleshoot issues with Snowflake and SKF’s data infrastructure. Experience with Agile solution development. Good to have: knowledge of SKF ERP systems (XA, SAP, PIM, etc.) and data related to sales, supply chain, and manufacturing.
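The posting centres on Snowflake data loading and enrichment. As a minimal, hedged sketch of that kind of workflow (using the snowflake-connector-python package), the snippet below bulk-loads staged files and enriches them with a customer-grouping dimension; the account, stage, table, and column names are placeholder assumptions, not SKF's actual objects.

```python
# Illustrative sketch only: load staged order files into Snowflake and enrich
# them with an assumed customer-grouping reference table.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # assumption: placeholder account identifier
    user="etl_user",
    password="***",
    warehouse="VA_WH",      # assumption: placeholder warehouse/database/schema
    database="VA_DW",
    schema="STAGING",
)

enrich_sql = """
    MERGE INTO staging.orders AS tgt
    USING ref.customer_grouping AS grp
      ON tgt.customer_id = grp.customer_id
    WHEN MATCHED THEN UPDATE SET tgt.customer_group = grp.customer_group
"""

cur = conn.cursor()
try:
    # COPY INTO bulk-loads files already placed in an internal stage.
    cur.execute(
        "COPY INTO staging.orders FROM @va_stage/orders/ "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    cur.execute(enrich_sql)
finally:
    cur.close()
    conn.close()
```

In practice this kind of load-and-enrich step would normally run inside one of the ETL/ELT tools named above (for example DBT or Informatica) rather than as a standalone script.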
Candidate Profile: Bachelor’s degree in computer science, Information technology or a related field SKF is committed to creating a diverse environment, and we firmly believe that a diverse workforce is essential for our continued success. Therefore, we only focus on your experience, skills, and potential. Come as you are – just be yourself. #weareSKF Some Additional Information This position will be located in Bangalore. For questions regarding the recruitment process, please contact Anuradha Seereddy, Recruitment Specialist, on email anuradha.seereddy@skf.com . About SKF SKF has been around for more than a century and today we are one of the world’s largest global suppliers of bearings and supporting solutions for rotating equipment. With more than 40,000 employees in around 130 countries, we are truly global. Our products are found everywhere in society. In fact, wherever there is movement, SKF’s solutions might be at work. This means that we are an important part of the everyday lives of people and companies around the world. See more, at www.skf.com.
Posted 5 days ago
6.0 years
0 Lacs
India
Remote
Job Title : Technical Lead – ServiceNow HRSD Job Type: Remote Experience Level: 6+ Years Job Summary: We are seeking an experienced ServiceNow HRSD Technical Lead with deep technical expertise in architecting, implementing, and optimizing the Human Resources Service Delivery (HRSD) module within ServiceNow. The ideal candidate will be responsible for leading technical design, solutioning, integrations, and delivery of complex HR workflows while working closely with cross-functional stakeholders, developers, and business users. Key Responsibilities: Technical Solutioning & Architecture Lead the design and development of scalable ServiceNow HRSD solutions, including Employee Center, Lifecycle Events, Case and Knowledge Management. Architect and implement modular and configurable HR services, guided by global process best practices and compliance needs. Customize HR Services, HR Case Management, HR Profiles, Lifecycle Events (onboarding, offboarding, transfers), and Document Management. Platform Development & Configuration Design and develop custom UI pages, Service Catalog items, client scripts, business rules, Script Includes, and Scoped Apps within the HRSD context. Configure Employee Center / Employee Center Pro, HR Agent Workspace, and mobile experiences. Ensure proper access controls and data privacy using HR-specific ACLs, Contextual Security, and Role-based Access. Integrations & Automation Build and manage integrations with external HRIS systems (Workday, SAP SuccessFactors, Oracle HCM), payroll platforms, ID provisioning tools, and document signing platforms using REST/SOAP APIs, Integration Hub, and MID Servers. Design intelligent automation flows for case routing, notifications, and approvals using Flow Designer and Virtual Agent. Analytics & Reporting Develop customized dashboards and reports using Performance Analytics for HR KPIs (case volumes, SLAs, onboarding efficiency, etc.). Enable proactive service delivery through usage tracking and employee feedback analysis. Leadership & Delivery Oversight Lead technical teams through Agile delivery cycles; manage sprint planning, backlog grooming, and peer reviews. Conduct technical workshops and design sessions with stakeholders. Ensure technical documentation, release management, and post-deployment support processes are well established. Review development work for code quality, adherence to platform governance, and HR-specific security protocols. Required Skills & Qualifications: 6+ years of hands-on ServiceNow platform experience, with 2–3+ years specifically in HRSD implementations. In-depth knowledge of HRSD architecture, HR Service Configuration, Lifecycle Events, Document Management, and Case Management workflows. Proficient in JavaScript, Glide APIs, Flow Designer, UI Policy/Actions, and Client-side scripting. Hands-on experience with Employee Center Pro, HR Agent Workspace, Mobile App configurations, and Document Templates. Deep understanding of data privacy, field-level encryption, and HR data separation in ServiceNow. Experience integrating with Workday, SAP SuccessFactors, Active Directory, Okta, or DocuSign. Working knowledge of HR compliance requirements (e.g., GDPR, HIPAA, SOC). Preferred Certifications: ServiceNow Certified System Administrator (CSA) – Mandatory ServiceNow Certified Implementation Specialist – HRSD – Highly Preferred ServiceNow Application Developer – Preferred ITIL v4 Foundation Certification – Advantageous
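Since the role involves integrating HRSD with external systems, here is a minimal, hedged sketch of creating an HR case from an outside service through the ServiceNow REST Table API. The instance URL, credentials, and field values are illustrative assumptions; a production integration would use OAuth and scoped roles rather than basic auth, in line with the data-privacy requirements above.

```python
# Illustrative only: create an HR case via the ServiceNow Table API.
import requests

INSTANCE = "https://example-instance.service-now.com"  # assumed placeholder
AUTH = ("integration.user", "***")                      # assumed credentials

payload = {
    "short_description": "New hire laptop provisioning",
    "hr_service": "Onboarding",        # assumed field value for illustration
    "opened_for": "employee_sys_id",   # assumed sys_id placeholder
}

resp = requests.post(
    f"{INSTANCE}/api/now/table/sn_hr_core_case",
    auth=AUTH,
    headers={"Content-Type": "application/json", "Accept": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["result"]["number"])  # e.g. an HR case number
```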
Posted 5 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Summary We are looking for a highly skilled and adaptable Site Reliability Engineer to become a key member of our Cloud Engineering team. In this crucial role, you will be instrumental in designing and refining our cloud infrastructure with a strong focus on reliability, security, and scalability . As an SRE, you'll apply software engineering principles to solve operational challenges, ensuring the overall operational resilience and continuous stability of our systems. This position requires a blend of managing live production environments and contributing to engineering efforts such as automation and system improvements. Key Responsibilities: Cloud Infrastructure Architecture and Management: Design, build, and maintain resilient cloud infrastructure solutions to support the development and deployment of scalable and reliable applications. This includes managing and optimizing cloud platforms for high availability, performance, and cost efficiency. Enhancing Service Reliability: Lead reliability best practices by establishing and managing monitoring and alerting systems to proactively detect and respond to anomalies and performance issues. Utilize SLI, SLO, and SLA concepts to measure and improve reliability. Identify and resolve potential bottlenecks and areas for enhancement. Driving Automation and Efficiency: Contribute to the automation, provisioning, and standardization of infrastructure resources and system configurations. Identify and implement automation for repetitive tasks to significantly reduce operational overhead. Develop Standard Operating Procedures (SOPs) and automate workflows using tools like Rundeck or Jenkins. Incident Response and Resolution: Participate in and help resolve major incidents, conduct thorough root cause analyses, and implement permanent solutions. Effectively manage incidents within the production environment using a systematic problem-solving approach. Collaboration and Innovation: Work closely with diverse stakeholders and cross-functional teams, including software engineers, to integrate cloud solutions, gather requirements, and execute Proof of Concepts (POCs). Foster strong collaboration and communication. Guide designs and processes with a focus on resilience and minimizing manual effort. Promote the adoption of common tooling and components, and implement software and tools to enhance resilience and automate operations. Be open to adopting new tools and approaches as needed. Required Skills and Experience: Cloud Platforms: Demonstrated expertise in at least one major cloud platform (AWS, Azure, or GCP). Infrastructure Management: Proven proficiency in on-premises hosting and virtualization platforms (VMware, Hyper-V, or KVM). Solid understanding of storage internals (NAS, SAN, EFS, NFS) and protocols (FTP, SFTP, SMTP, NTP, DNS, DHCP). Experience with networking and firewall technologies. Strong hands-on experience with Linux internals and operating systems (RHEL, CentOS, Rocky Linux). Experience with Windows operating systems to support varied environments. Extensive experience with containerization (Docker) and orchestration (Kubernetes) technologies. Automation & IaC: Proficiency in scripting languages (shell and Python). Experience with configuration management tools (Ansible or Puppet). Must have exposure to Infrastructure as Code (IaC) tools (Terraform or CloudFormation). Monitoring & Observability: Experience setting up and configuring monitoring tools (Prometheus, Grafana, or the ELK stack). 
Hands-on experience implementing OpenTelemetry for observability. Familiarity with monitoring and logging tools for cloud-based applications. Service Reliability Concepts: A strong understanding of SLI, SLO, SLA, and error budgeting. Soft Skills & Mindset: Excellent communication and interpersonal skills for effective teamwork. We value proactive individuals who are eager to learn and adapt in a dynamic environment. Must possess a pragmatic and adaptable mindset, with a willingness to step outside comfort zones and acquire new skills. Ability to consider the broader system impact of your work. Must be a change advocate for reliability initiatives. Desired/Bonus Skills: Experience with DevOps toolchain elements like Git, Jenkins, Rundeck, ArgoCD, or Crossplane. Experience with database management, particularly MySQL and Hadoop. Knowledge of cloud cost management and optimization strategies. Understanding of cloud security best practices, including data encryption, access controls, and identity management. Experience implementing disaster recovery and business continuity plans. Familiarity with ITIL (Information Technology Infrastructure Library) processes
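As a concrete illustration of the SLI/SLO/error-budget concepts the role calls for, the short Python sketch below computes an availability error budget over a 30-day window; the SLO target and downtime figures are examples, not prescribed values.

```python
# Minimal error-budget arithmetic for an availability SLO.
def error_budget(slo: float, period_minutes: int = 30 * 24 * 60) -> float:
    """Allowed downtime (minutes) for a given SLO over the period."""
    return (1.0 - slo) * period_minutes

def budget_remaining(slo: float, observed_downtime_min: float,
                     period_minutes: int = 30 * 24 * 60) -> float:
    """Fraction of the error budget still unspent (can go negative)."""
    budget = error_budget(slo, period_minutes)
    return (budget - observed_downtime_min) / budget

if __name__ == "__main__":
    # 99.9% availability over 30 days allows about 43.2 minutes of downtime.
    print(round(error_budget(0.999), 1))            # 43.2
    print(round(budget_remaining(0.999, 10.0), 3))  # ~0.769 of the budget left
```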
Posted 5 days ago
0.0 - 3.0 years
10 - 15 Lacs
HITEC City, Hyderabad, Telangana
On-site
Job Title: Senior Software Engineer (React.js / Next.js) Location: Hyderabad, Telangana Experience: 4 - 6 years Employment Type: Full-time Education UG: B.Tech/B.E. in Any Specialization PG: Any Postgraduate About the Role We are seeking a highly skilled and experienced Senior Software Engineer (Frontend) to join our product engineering team. The ideal candidate will have a strong focus on building responsive, user-friendly, and scalable web applications using modern front-end technologies, such as ReactJS, NextJS, Ant Design, and JavaScript. You will work closely with cross-functional teams, including designers, back-end developers, and product managers, to create high-quality applications that meet business requirements and provide excellent user experiences. Key Responsibilities: Front-End Development: Design, develop, and maintain dynamic, responsive, and high-performance web applications using ReactJS, NextJS, Ant Design, Redux, MUI, HTML, CSS, and JavaScript. Write clean, maintainable, and efficient code adhering to industry best practices and coding standards. Optimize applications for maximum speed and scalability. Component Architecture & Reusability Develop reusable UI components and libraries for scalability and efficiency. Implement designs using Ant Design or other design frameworks to ensure a consistent look and feel across all applications. User Experience (UX) and Interface (UI) Collaborate with designers and stakeholders to translate design wireframes into functional, interactive applications. Ensure applications are user-friendly and accessible, adhering to UX/UI best practices and accessibility standards (e.g., WCAG compliance). Security & Data Protection Implement robust front-end security practices, such as data encryption, CSRF/XSS protection, and secure authentication flows. Work proactively to identify and mitigate security vulnerabilities within the front-end codebase. Plugin & System-Level Integration Handle seamless integration of third-party plugins and libraries to enhance application functionality. Modify and manage system-level configurations, such as camera handling, microphone usage, and other hardware or browser API-based functionalities. Ensure compliance with privacy regulations when dealing with system-level data or permissions. Collaboration & Communication Work closely with back-end developers to integrate APIs and ensure seamless communication between front-end and back-end systems. Collaborate with product managers to understand requirements and deliver solutions that align with business goals. Mentor junior developers, conduct code reviews, and provide constructive feedback. Testing & Debugging Conduct thorough testing of applications, including UI testing, unit testing, integration testing, and performance testing. Debug and resolve front-end issues promptly. Continuous Improvement Stay up-to-date with emerging trends and technologies in front-end development, ensuring the use of modern tools and practices. Continuously optimize the development process and improve code quality through automation and other tools. Required Qualifications: Technical Skills: Front-End Frameworks: Strong expertise in ReactJS, NextJS, Bootstrap and Ant Design. Web Technologies: Proficient in HTML5, CSS3, JavaScript (ES6+). State Management: Experience with Redux, Context API, or similar state management libraries. APIs: Familiarity with integrating RESTful APIs into front-end applications. Version Control: Proficient with Git and version control workflows. 
Security: Strong knowledge of front-end security best practices, including authentication, encryption, and secure data handling. Experience: Professional Background: 4-6 years of experience in front-end development, with a proven track record of building and delivering production-grade applications. Responsive Design: Hands-on experience creating responsive and adaptive designs for multiple screen sizes and devices. Cross-Browser Compatibility: Expertise in ensuring cross-browser and cross-platform compatibility. Soft Skills: Strong analytical and problem-solving abilities. Excellent communication skills, both verbal and written. Ability to work independently and in a team environment. Job Type: Full-time Pay: ₹1,000,000.00 - ₹1,500,000.00 per year Benefits: Health insurance Schedule: Monday to Friday Supplemental Pay: Performance bonus Experience: React and Next js: 3 years (Preferred) Work Location: In person Application Deadline: 14/08/2025
Posted 5 days ago
9.0 - 17.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Qualification Senior Cloud Engineer/Architect with experience in designing and implementing the overall cloud architecture for the data intelligence platform, ensuring scalability, reliability, and security. Role Key Responsibilities Designing and implementing the overall cloud architecture for the data intelligence platform, ensuring scalability, reliability, and security. Evaluating and selecting appropriate AWS services to meet the platform's requirements. Developing and maintaining reusable Infrastructure as Code (IaC) using Terraform to automate the provisioning and management of cloud resources. Implementing CI/CD pipelines for continuous integration and deployment of infrastructure changes. Ensuring that the platform adheres to security best practices and MMC compliance requirements, including data encryption, access controls, and monitoring. Collaborating with security teams to implement security measures and conduct regular audits. Integrating the marketplace with backend services to facilitate service requests and provisioning. Should have experience of working on the Databricks Platform. Experience 9 to 17 years Job Reference Number 13048
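To make the encryption and compliance duty above concrete, here is a minimal boto3 sketch that flags S3 buckets without a default server-side encryption configuration. It is illustrative only, assumes suitable read-only credentials, and is not the organisation's actual audit tooling.

```python
# Illustrative compliance check: list buckets lacking default SSE.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption, flag for remediation")
        else:
            raise  # e.g. access denied on a bucket we cannot inspect
```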
Posted 5 days ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Title: Solutions Architect – Agentic AI Systems & Scalable Platforms Experience : 15+ years Location : Delhi NCR, Bangalore, Pune (Hybrid) Job Summary: We are looking for a highly experienced Solutions Architect (15+ years) to lead the design and implementation of scalable, event-driven AI/ML platforms. This role will focus on building distributed systems, integrating multi-model AI orchestration, ensuring observability, and securing data operations across hybrid cloud environments. The ideal candidate combines deep technical acumen with excellent communication skills, capable of engaging with executive leadership and leading cross-functional engineering teams. Must Have Skills: 15+ years of experience in architecture and software engineering, with deep expertise in distributed systems Core Competencies Distributed System Design: Proven leadership in architecting resilient, scalable platforms for high-concurrency agent orchestration and state management AI/ML System Integration: Strong experience designing AI/ML integration layers with support for multi-model orchestration, fallback strategies, and cost optimization Event-Driven Orchestration: Expertise in implementing event-driven orchestration workflows, including human-in-the-loop decision points and rollback mechanisms Observability Architecture: Hands-on with observability architecture, including monitoring, tracing, debugging, and telemetry for AI systems Security-First Design: In-depth knowledge of zero-trust security architectures, with RBAC/ABAC and fine-grained access control for sensitive operations Technical Proficiencies Programming: Python (async frameworks), TypeScript/JavaScript (modern frameworks), Go Container Orchestration: Kubernetes, service mesh architectures, serverless patterns Real-time Systems: WebSocket protocols, event streaming, low-latency architectures Infrastructure Automation: GitOps, infrastructure as code, automated scaling policies Performance Engineering: Distributed caching, query optimization, resource pooling Platform Integration Skills API Gateway Design: Rate limiting, authentication, multi-provider abstraction Workflow Orchestration: State machines, saga patterns, compensating transactions Frontend Architecture: Micro-frontends, real-time collaboration features, responsive data visualization Persistence Strategies: Polyglot persistence, CQRS patterns, event sourcing Track record of effective collaboration with AI/ML engineers, Data Engineers, Backend Developers, and UI/UX teams on complex platform delivery, a lead by doing attitude towards resolving issues and technical roadblocks. 
Demonstrated ability to produce architecture diagrams and maintain technical documentation standards Excellent communication and stakeholder management, especially with senior and executive leadership Nice to Have Skills: Experience with real-time systems (WebSockets, event streaming, low-latency protocols) Exposure to polyglot persistence, event sourcing, CQRS patterns Experience with multi-tenant SaaS platforms and usage-based billing models Knowledge of hybrid cloud deployments and cost attribution for AI compute workloads Familiarity with compliance frameworks, audit trail design, and encryption strategies Exposure to frontend architectures like micro-frontends and real-time dashboards Experience with infrastructure as code (IaC) and performance tuning for distributed caching Role & Responsibilities: Architect and lead development of scalable, distributed agent orchestration systems Design abstraction layers for multi-model AI integration with efficiency and fallback logic Develop event-driven workflows with human oversight, compensating transactions, and rollback paths Define observability architecture, including logging, tracing, metrics, and debugging for AI workflows Implement zero-trust and fine-grained security controls for sensitive data operations Create and maintain technical artifacts, including architecture diagrams, standards, and design patterns Act as technical liaison between cross-functional teams and executive stakeholders Guide engineering teams through complex solutioning, issue resolution, and performance optimization Drive documentation standards and ensure architectural alignment across the delivery lifecycle Key Skills: Distributed systems, AI orchestration, Event-driven workflows, Kubernetes, GitOps, Python, Go, TypeScript, Observability, Zero-trust security, Architecture diagrams, Real-time systems, Hybrid cloud, CQRS, Documentation standards
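The multi-model orchestration and fallback requirement can be illustrated with a small, framework-free asyncio sketch: try an ordered list of providers, apply a per-call timeout, and fall back on failure. The provider names and the simulated failure are assumptions for demonstration, not real SDK calls.

```python
# Illustrative fallback orchestration across assumed model providers.
import asyncio

async def call_model(provider: str, prompt: str) -> str:
    # Stand-in for a real provider SDK call; here it only simulates latency.
    await asyncio.sleep(0.1)
    if provider == "primary":
        raise RuntimeError("primary provider unavailable")  # force a fallback
    return f"[{provider}] answer to: {prompt}"

async def generate_with_fallback(prompt: str,
                                 providers=("primary", "secondary", "cheap")) -> str:
    last_error = None
    for provider in providers:
        try:
            # A per-provider timeout keeps one slow backend from stalling the flow.
            return await asyncio.wait_for(call_model(provider, prompt), timeout=5.0)
        except (RuntimeError, asyncio.TimeoutError) as err:
            last_error = err  # record and move on to the next provider
    raise RuntimeError(f"all providers failed: {last_error}")

print(asyncio.run(generate_with_fallback("summarise the incident report")))
```

A production orchestration layer would add cost-aware routing, retries with backoff, and the observability hooks described above; this sketch only shows the ordered-fallback shape.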
Posted 5 days ago
9.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Qualification Bachelor's degree in Computer Science, Engineering, or related field (or equivalent experience). Minimum of 9 years of experience in cloud architecture, specifically with AWS. Expertise in core AWS services, including EC2, Lambda, S3, VPC, RDS, and others. Hands-on experience with AWS management tools (CloudFormation, CloudWatch, etc.). Solid understanding of networking, security, and storage concepts in AWS. Experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or AWS CDK. Familiarity with DevOps practices and CI/CD pipelines. Strong troubleshooting and problem-solving skills. Excellent communication skills, both written and verbal, with the ability to communicate complex technical concepts to non-technical stakeholders. Preferred Qualifications: AWS Certified Solutions Architect – Associate or Professional. Experience with multi-cloud environments (e.g., Azure, Google Cloud). Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes). Experience with serverless architectures and microservices. Role Cloud Architecture Design: Design scalable, high-availability, and fault-tolerant cloud architectures using AWS services like EC2, S3, Lambda, RDS, VPC, etc. Solution Implementation: Work with development and operations teams to ensure AWS architecture is implemented, deployed, and optimized according to best practices. Cloud Migration: Lead cloud migration projects from on-premises infrastructure to AWS, ensuring smooth transitions with minimal disruptions. Security & Compliance: Ensure that cloud solutions adhere to industry standards and compliance requirements (e.g., GDPR, HIPAA, etc.). Implement robust security measures like encryption, IAM, and multi-factor authentication. Cost Optimization: Continuously monitor and optimize AWS environments to reduce costs and improve efficiency, using tools such as AWS Cost Explorer, Trusted Advisor, and CloudWatch. Technical Leadership: Provide technical leadership, guidance, and mentorship to junior team members, fostering a collaborative and innovative environment. Documentation & Reporting: Develop and maintain comprehensive architecture documentation and regular reports on system performance, security, and cost efficiency. Experience 10 to 12 years Job Reference Number 12828
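As a small illustration of the monitoring and cost-optimization responsibilities above, the boto3 sketch below creates a CloudWatch alarm on EC2 CPU utilisation; the region, instance ID, and SNS topic ARN are placeholder assumptions.

```python
# Illustrative monitoring setup: alarm on sustained high CPU for one instance.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-app-server",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # assumed ID
    Statistic="Average",
    Period=300,            # 5-minute datapoints
    EvaluationPeriods=3,   # sustained for 15 minutes before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],      # assumed topic
)
```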
Posted 5 days ago
7.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
About the Organisation We are one of India’s leading AMISPs (Advanced Metering Infrastructure Service Providers), manufacturing over 5 Lac smart energy meters monthly, supported by in-house teams for Design, Development, Validation, Software Engineering, and Managed Software Services. With a turnover of ₹600 Cr and rising, we are expanding into smart water and gas metering solutions. This position is based in Kolkata and offers a unique opportunity to be part of a data-intensive product ecosystem at scale. Position Overview We are looking for a hands-on, technically mature Lead Data Platform Engineer who thrives on architecting and optimizing time-series and high-throughput data platforms. You will own the end-to-end database architecture and engineering function with a sharp focus on PostgreSQL (TimescaleDB), data lifecycle performance, and advanced query optimization. Candidates from high-scale, fast-paced environments such as e-commerce, travel-tech, or dynamic startups will find this role familiar and challenging in the right measure. Suggested Designation Lead Data Platform Engineer – PostgreSQL & Big Data Solutions Key Responsibilities Design and optimize scalable PostgreSQL (time-series) data architectures to manage billions of telemetry records. Develop and maintain high-performance data models, schemas, indexing strategies, and time-series data workflows. Ensure superior performance for time-bound analytics and search queries across large and partitioned datasets. Work with DevOps and cloud engineering teams to provision AWS-native or hybrid DB environments with cost efficiency. Collaborate closely with product and engineering teams to optimize DB interactions, ingestion pipelines, and data lifecycle policies. Champion coding standards, PostgreSQL practices, and peer reviews across the backend data layer. Act as the go-to expert for database architecture decisions and high-availability, failover strategies. Orchestrate and work closely with the Deployment and/or Solutions teams to optimise resources as per project needs. Required Skills & Experience 5–7 years of strong PostgreSQL (TimescaleDB) development experience in data-heavy environments. Prior experience as a Database Architect designing data platforms handling high-volume ingestion and query loads. Hands-on expertise in query optimization, indexing, and partitioning strategies. Sound scripting knowledge in SQL, Python or Bash for automation and integration. 1–2 years’ experience working on AWS RDS, Aurora, or equivalent managed DB platforms. Exposure to ElasticSearch, Redis, Kafka, or other supporting high-throughput technologies is a plus. Strong grounding in techniques relevant to large data platforms. Proficiency in schema evolution, data archival techniques, and long-term retention architecture. Good understanding of security, access control, and encryption best practices in cloud-hosted environments. Preferred Background Hands-on, developer-oriented DBA, not just a database manager or administrator. Experience in e-commerce, travel, logistics, or food-tech companies and allied startups; Flipkart, Amazon, MakeMyTrip, Yatra, or other high-scale startups preferred. Bachelor’s or Master’s degree in Computer Science, IT, or allied fields from a reputed institution. Values-driven individual with attention to data integrity, performance, and scalability. Authority & Strategic Impact Own the data platform’s performance, uptime, and design direction.
Make authoritative calls on data modeling, indexing, and schema management. Collaborate with software architects and customer IT teams for scalable DB strategies. Contribute to the cloud migration and optimization roadmap with cross-functional stakeholders. Mentor junior developers and database engineers within the platform team.
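A minimal sketch of the kind of time-series schema work described above: creating a telemetry table, converting it to a TimescaleDB hypertable, and adding a composite index for per-meter, time-bounded queries. The connection details and table/column names are illustrative assumptions, shown here with psycopg2.

```python
# Illustrative TimescaleDB schema setup for smart-meter telemetry (assumed names).
import psycopg2

conn = psycopg2.connect("dbname=metering user=postgres password=*** host=localhost")
conn.autocommit = True
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS meter_readings (
        meter_id  BIGINT       NOT NULL,
        ts        TIMESTAMPTZ  NOT NULL,
        kwh       DOUBLE PRECISION,
        voltage   DOUBLE PRECISION
    )
""")
# Convert to a hypertable partitioned on time (TimescaleDB-specific function).
cur.execute("SELECT create_hypertable('meter_readings', 'ts', if_not_exists => TRUE)")
# Composite index keeps per-meter, time-bounded queries fast on large partitions.
cur.execute("CREATE INDEX IF NOT EXISTS idx_meter_ts ON meter_readings (meter_id, ts DESC)")

cur.close()
conn.close()
```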
Posted 5 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Backend & MLOps Engineer – Integration, API, and Infrastructure Expert 1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel in deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure. 2. Key Responsibilities: 2.1. Create RESTful and/or gRPC APIs for model services. 2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images. 2.3. Develop CI/CD pipelines for model training and deployment. 2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI. 2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines. 2.6. Build secured data ingestion and processing workflows (ETL/ELT). 2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage 3. Educational Qualifications Essential Requirements: 3.1. B.Tech/ M.Tech in Computer Science, Information Technology, or Software Engineering. 3.2. Strong foundation in distributed systems, databases, and cloud computing. 3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines. Professional Certifications: 3.4. AWS Solutions Architect/DevOps Engineer Professional 3.5. Google Cloud Professional ML Engineer or DevOps Engineer 3.6. Azure AI Engineer or DevOps Engineer Expert. 3.7. Kubernetes Administrator (CKA) or Developer (CKAD). 3.8. Docker Certified Associate Core Skills & Tools 4. Backend Development: 4.1. Languages: Python, FastAPI, Flask, Go, Java, Node.js, Rust (for performance-critical components) 4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js. 4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections. 4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols. 5. MLOps & Model Management: 5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect 5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML 5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML 5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store 5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions 6. Infrastructure & DevOps: 6.1. Containerization: Docker, Podman, container optimization. 6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift. 6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred). 6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible. 6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD. 6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins. 7. Database & Storage: 7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications) 7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch 7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus 7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg 7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO 7.6. Backend: Python (FastAPI, Flask), Node.js (optional) 7.7. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins 8. Secure Deployment: 8.1. Military-grade security protocols and compliance 8.2. Air-gapped deployment capabilities 8.3. Encrypted data transmission and storage 8.4. 
Role-based access control (RBAC) & IDAM integration 8.5. Audit logging and compliance reporting 9. Edge Computing: 9.1. Deployment on naval vessels with air gapped connectivity. 9.2. Optimization of applications for resource-constrained environment. 10. High Availability Systems: 10.1. Mission-critical system design with 99.9% uptime. 10.2. Disaster recovery and backup strategies. 10.3. Load balancing and auto-scaling. 10.4. Failover mechanisms for critical operations. 11. Cross-Compatibility Requirements: 11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI). 11.2. Develop model loaders for AI Engineer's ONNX/ serialized models. 11.3. Provide UI developers with test environments, mock data, and endpoints. 11.4. Support frontend debugging, edge deployment bundling, and user role enforcement. 12. Experience Requirements 12.1. Production experience with cloud platforms and containerization. 12.2. Experience building and maintaining APIs serving millions of requests. 12.3. Knowledge of database optimization and performance tuning. 12.4. Experience with monitoring and alerting systems. 12.5. Architected and deployed large-scale distributed systems. 12.6. Led infrastructure migration or modernization projects. 12.7. Experience with multi-region deployments and disaster recovery. 12.8. Track record of optimizing system performance and cost
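Item 2.4 above mentions serving models as microservices with FastAPI (among other options). Below is a minimal, hedged sketch of that pattern: a serialized model loaded at startup and exposed behind a documented REST endpoint. The model path and feature schema are assumptions; a production deployment in the environment described would add authentication, RBAC, and the observability hooks listed above.

```python
# Illustrative model-serving microservice (assumed model file and schema).
from fastapi import FastAPI
from pydantic import BaseModel
import joblib

app = FastAPI(title="model-service")           # OpenAPI docs served at /docs
model = joblib.load("model.joblib")            # assumed path to a scikit-learn model

class Features(BaseModel):
    values: list[float]                        # assumed flat numeric feature vector

@app.post("/predict")
def predict(features: Features):
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with:  uvicorn service:app --host 0.0.0.0 --port 8000
```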
Posted 5 days ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Essential Services : Role & Location fungibility At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team . To achieve this, employees at ICICI Bank are expected to be role and location-fungible with the understanding that Banking is an essential service . The role descriptions give you an overview of the responsibilities, it is only directional and guiding in nature. About the Role As an Information Security Manager in ICICI Bank you will be responsible for leading and managing the organization’s information security program to ensure the confidentiality, integrity, and availability of data, systems, and networks. This role involves developing, implementing, and maintaining security policies, standards, and procedures, overseeing compliance efforts, and responding to evolving cyber threats. The Information Security Manager works closely with technical teams, business leaders, and external stakeholders to foster a culture of security and effectively mitigate risks. Key Responsibilities Develop and Maintain Security Policies: Create, implement, and regularly update information security policies, procedures, and guidelines aligned with organizational objectives and regulatory requirements. Collaborate: Conduct regular risk assessments and vulnerability analyses to identify, evaluate, and mitigate security risks to the organization’s assets. Monitor emerging threats, security trends, and technologies, regularly recommending adjustments and enhancements to the security program to maintain robust protection. Incident Response: Lead the investigation and response to actual and suspected security incidents, ensuring effective containment, analysis, and communication of findings. Compliance Oversight: Ensure ongoing compliance with all applicable laws, industry standards (e.g., GDPR, PCI DSS, ISO 27001), and internal policies. Coordinate audits and manage remediation of non-compliant areas. Systems & Technology Oversight: Oversee the deployment, configuration, maintenance, and monitoring of security tools such as firewalls, encryption solutions, intrusion detection systems, and access controls. Collaboration: Work with other departments to integrate security into business processes and projects. Communicate risks and security postures to stakeholders and senior management. Vendor and Third-Party Management: Ensure that third-party vendors and partners adhere to organizational security standards and participate in risk assessments as needed. Reporting: Produce detailed reports on the status of information security, audit findings, incidents, and compliance for senior management and governance boards. Qualifications & Skills Educational Qualification: Engineering Graduate in CS, IT, EC or InfoSec, CyberSec or MCA equivalent. Certifications: Certification(s) such as CISSP, CISM, or equivalent are preferred. Compliance: Great Awareness of cyber security trends & hacking techniques. About the Business Group Information Security Group of ICICI Bank believes in providing services to its customers in the safest and secured manner, keeping in mind that data protection for its customers is as important as providing quality banking services across the spectrum. 
The CIA triad of Confidentiality, Integrity, and Availability is built on the vision of creating a comprehensive information security framework. The Bank also lays emphasis on customer elements like protection from phishing, adaptive authentication, awareness initiatives, and providing easy-to-use protection and risk-configuration controls in the hands of customers. With this core responsibility, ICICI Bank administers and promotes ongoing campaigns to create awareness among customers on security aspects while banking through digital channels.
Posted 5 days ago
6.0 - 8.0 years
0 Lacs
Noida, Uttar Pradesh, India
Remote
Production Control Analyst_Full-Time_Noida (Remote)_Shift Timing: PST and overlap IST/PST Job Title: Production Control Analyst Job Type: Full-Time Location: Noida (Remote) Experience: 6-8 Years Shift Timing: PST and overlap IST/PST Job Description: Seeking a Production Control Security Analyst who has strong technical knowledge of IBM Security Verify Access, IBM Security Verify Governance Identity Manager, and IBM Mainframe z/OS RACF products to provide development and support. Production Control Analyst Major Responsibilities: Provide support for day-to-day operations Support access provisioning/de-provisioning and resolve all access issues on mainframe, web security, SAP and other platforms Support COTS products as they relate to the Production Control Security Analyst role Must have a good understanding of Windows, Linux and Mainframe systems 3 years of AD management and automation with PowerShell Provide support to developers to analyze and resolve production issues. Provide 24/7 on-call support for production processing. Review and resolve tickets assigned. Good documentation and communication skills Provide cross-training support for other team members 1-2 years of Perl, Python, DOS, Bash scripting experience 1-2 years of Java and JavaScript development expertise as it relates to TDI (IBM Security Directory Integrator) 3-5 years of operational and maintenance activities IBM Security Verify Governance Identity Manager (ISVGIM) Version 10.0.1.x Operational Activities Administration Activities Assist and troubleshoot manual creation of users Manual reconciliation or adoption of any target users Management of provisioning and password issues Creation and management of roles and role-to-entitlement mapping Management of role owner approvers Creation and management of user recertification campaigns Automate manual processes wherever possible. Work with technical and business stakeholders to troubleshoot. Ongoing maintenance of the existing IBM environment includes: Monitoring the reconciliation of the target system Monitoring the user feed files Add, modify, update provisioning policies wherever necessary in the application. Create, modify, update SDI (TDI) assembly lines to keep the data current. Troubleshooting any of the user provisioning and password related processes Troubleshooting any application component failures Troubleshooting role and access provisioning and revocation. Coordinating the application of fix packs to IBM Security Verify Governance Identity Manager and its components as required. Coordinating the production change and migration activities IBM (Tivoli) Security Directory Integrator development and code maintenance. IBM Security Verify Access and Federation Module Operational Activities Administration Activities Create and maintain junctions, groups, ACLs, Objectspace, dynamic URLs, configuration files Provide support for the application team to troubleshoot web proxy related issues. Set up and troubleshoot SSO federation with SAML 2.0, WS-Federation, and OpenID Connect protocols for federated access. Handle federation certificate updates with the clients. Set up and maintain Advanced Access Control (AAC) policies and Mobile Multi-Factor Authentication (MMFA) data for all user types.
Ongoing maintenance of the existing environment Troubleshooting any of the single sign-on and federation related incidents Troubleshooting any application component failures Coordinating the application of fix packs to IBM Security Verify Access and its components as required Coordinating the production change and migration activities Troubleshooting access related issues with the application team whenever necessary. Skills that are needed: IBM Security Verify Access (v10.0.3 and above) administration for user and access provisioning. IBM Security Verify Governance Identity Manager (v10.0.1 and above) administration for user and access provisioning. IBM Security (Tivoli) Directory Integrator (ISDI) (TDI) IBM Security Directory Server (SDS) IBM Security Verify Access Federation Module WebSphere Application Server – Network Deployment (WAS-ND) IBM DB2 Database Server Version 11.5 and above (DB2), GSKit, certificate updates. Mainframe RACF administration for user and access provisioning Oracle EBS user provisioning SAP ECC 6.0 User Security Administration Knowledge of certificate generation. Key skills include user account management, group policy management, knowledge of Federation Services (ADFS), LDAP queries, PowerShell scripting, backup and recovery processes, security management, DNS management, and troubleshooting user authentication issues. Automation, particularly through PowerShell scripting, enhances efficiency in user management tasks and reduces the likelihood of human error, which is highly desired in IT roles. Skills that would be nice to have: A good understanding of DOS and Bash scripting. Support for PGP encryption and secure file transfer using Serv-U Microsoft Power Apps, Microsoft Power BI SAP HANA user provisioning Knowledge of DNS, load balancer, and TCP/IP concepts
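The skills list above includes LDAP queries against Active Directory. As an illustrative, non-authoritative sketch (written in Python with the ldap3 library rather than PowerShell), the snippet below finds disabled AD accounts; the domain controller, bind account, and base DN are placeholder assumptions.

```python
# Illustrative AD query: list disabled accounts via LDAP (assumed directory details).
from ldap3 import Server, Connection, SUBTREE

server = Server("ldaps://dc01.example.com")  # assumed domain controller
conn = Connection(server, user="EXAMPLE\\svc_audit", password="***", auto_bind=True)

# userAccountControl bit 2 set means the account is disabled; the OID below is
# the standard LDAP bitwise-AND matching rule.
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(&(objectCategory=person)(userAccountControl:1.2.840.113556.1.4.803:=2))",
    search_scope=SUBTREE,
    attributes=["sAMAccountName", "whenChanged"],
)
for entry in conn.entries:
    print(entry.sAMAccountName, entry.whenChanged)

conn.unbind()
```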
Posted 5 days ago
6.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Notice Period - Immediate to 15 days Experience - 6 to 10 years Location - Bangalore, Hyderabad, Pune, Chennai, Jaipur, Gurgaon Key Responsibilities Design, deploy, and manage scalable and fault-tolerant infrastructure on AWS using services like EC2, Lambda, ECS, RDS, S3, and VPC. Architect and implement multi-region and highly available cloud solutions to support business continuity. Optimize AWS environments for performance, cost, and scalability. Develop and maintain CI/CD pipelines using Jenkins. Implement Infrastructure as Code (IaC) using Terraform, AWS CloudFormation, or CDK. Automate operational tasks, such as monitoring, backups, and patch management. Implement cloud security best practices, including IAM, encryption, logging, and network segmentation. Conduct security assessments of AWS environments and remediate vulnerabilities. Design and enforce policies for secure DevOps workflows (e.g., secure secrets management, least privilege access). Integrate security tools into CI/CD pipelines for automated testing and compliance (e.g., Snyk, AWS Security Hub). Ensure compliance with industry standards (e.g., SOC 2, HIPAA, GDPR) through proper logging, monitoring, and reporting. Set up monitoring and alerting systems using tools like New Relic, CloudWatch or Prometheus. Perform root cause analysis and implement solutions to prevent recurring issues. Design and execute disaster recovery and incident response plans. Work closely with development, operations, and security teams to align objectives and improve workflows. Mentor junior engineers and provide technical leadership on cloud and DevOps practices. Act as a subject matter expert on AWS and security topics within the organization. Technical Skills Strong knowledge of AWS services, including EC2, Lambda, ECS/EKS, S3, CloudFormation, and IAM. Experience with networking concepts in AWS (e.g., VPC, Route 53, Transit Gateway, Direct Connect). Deep understanding of AWS security services, such as AWS WAF, Shield, GuardDuty, Security Hub, and KMS. Expertise in Jenkins. Proficiency with IaC tools like Terraform, AWS CloudFormation, or CDK. Scripting and automation using Python and Bash. Familiarity with cloud-native security concepts, including identity and access management, encryption, and zero-trust architecture. Knowledge of vulnerability scanning, penetration testing, and remediation strategies. Understanding of regulatory compliance requirements (e.g., SOC 2, HIPAA, GDPR, PCI-DSS). Preferred Soft Skills Strong problem-solving and analytical skills. Excellent communication and documentation abilities. Ability to work in a fast-paced, collaborative environment. Leadership and mentorship experience.
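One small, concrete example of the security-assessment and IAM duties listed above: a boto3 sketch that reports IAM users who have console passwords but no MFA device. It is purely illustrative and assumes read-only IAM permissions.

```python
# Illustrative IAM hygiene check: console users without MFA.
import boto3
from botocore.exceptions import ClientError

iam = boto3.client("iam")

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        name = user["UserName"]
        try:
            iam.get_login_profile(UserName=name)  # raises if no console password
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchEntity":
                continue  # access-key-only user, skip
            raise
        mfa = iam.list_mfa_devices(UserName=name)["MFADevices"]
        if not mfa:
            print(f"{name}: console access without MFA, remediate")
```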
Posted 5 days ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: Senior Java Developer Organization: Mobile First Applications Pvt Ltd Experience: 8+ Years Location: Pune Contract: 4+ months(extensible) Job Summary: We are looking for a Senior Java Developer with 8+ years of hands-on experience in Java development, particularly focused on cryptography, security compliance, and scalable system design. The ideal candidate should possess strong technical expertise in Core Java, secure application development, cryptographic implementations, and database systems. As an individual contributor, you will take full ownership of modules, contributing to the design and development of secure, reliable, and high-performance systems. Key Responsibilities: ● Design, develop, and maintain secure, high-performance Java-based applications. ● Implement and manage cryptographic algorithms, ensuring adherence to industry-standard protocols (e.g., AES, RSA, SHA, TLS). ● Apply secure coding and compliance practices to mitigate security threats (e.g., OWASP Top 10). ● Ensure system architecture aligns with Core and MVC patterns, and promote best practices across the team. ● Collaborate with architecture and DevOps teams to embed security-first design principles into the development lifecycle. ● Perform code reviews, threat modeling, and contribute to internal security audits. ● Maintain strong working knowledge of SQL (e.g., MySQL, PostgreSQL) and NoSQL databases (e.g., MongoDB, Cassandra), ensuring optimal data access and storage strategies. ● Keep documentation updated for cryptographic and security processes and mentor junior developers on technical and security aspects. ● Stay current with emerging security technologies, threats, and regulations. Required Skills & Qualifications: ● 8+ years of strong Java development experience. ● Deep knowledge of Java Cryptography Architecture (JCA), Java Security Manager, and encryption protocols. ● Proficiency in Core Java concepts and application of MVC architectural pattern. ● Strong hands-on experience with SQL and NoSQL databases including schema design, indexing, and optimization. ● Knowledge of security standards such as OWASP, PCI-DSS, ISO 27001, or NIST. ● Familiarity with Spring Security, OAuth2, JWT, and SAML. ● Experience working with build tools (Maven/Gradle) and version control systems (Git). ● Exposure to security testing tools like OWASP ZAP, Burp Suite, or Fortify. ● Strong analytical, debugging, and problem-solving skills. ● Ability to work independently as an individual contributor and take ownership of modules. Preferred Qualifications: ● Certifications such as Oracle Certified Java Developer, CISSP, or CEH. ● Experience in regulated industries like finance, banking, or healthcare. ● Knowledge of cloud security (AWS/GCP/Azure), container security, and API security.
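The role's cryptography focus is Java (JCA and related APIs), but the underlying concepts transfer across languages. As a hedged, language-neutral illustration of authenticated encryption with AES-256-GCM, here is a minimal sketch using Python's cryptography package; key handling is deliberately simplified and would live in a KMS/HSM in a compliant deployment.

```python
# Illustrative AES-256-GCM usage; not the JCA implementation the role requires.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice: fetched from a KMS/HSM
aesgcm = AESGCM(key)

nonce = os.urandom(12)                     # 96-bit nonce, never reused per key
plaintext = b"card-number: 4111-xxxx"
associated_data = b"txn-id:12345"          # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext
```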
Posted 5 days ago
4.0 years
0 Lacs
Mumbai, Maharashtra, India
Remote
We are looking for a Cloud-first IT Administrator with foundational Information Security (InfoSec) skills to manage and secure a modern, distributed IT environment. The ideal candidate will be responsible for overseeing cloud-native infrastructure, end-user devices, identity and access management, and maintaining InfoSec hygiene—driven by an automation-first, tools-based approach rather than traditional manual methods. Key Responsibilities Cloud-based IT Admin Manage Identity & Access Management : Google Workspace Admin or Azure AD or Okta Implement and enforce SSO, MFA policies Manage SaaS platforms used by the company: Google Workspace / Microsoft 365 / Slack / Zoom / Notion / Jira / others Setup and manage MDM (Mobile Device Management) across all endpoints (laptops / mobiles): Example tools: Hexnode, Intune, JAMF, Comodo Enforce security policies — device encryption, patching, antivirus, screen lock, remote wipe Enable self-service onboarding/offboarding — automate account provisioning and deprovisioning Manage asset inventory for cloud and physical devices Setup VPN / Zero Trust Access models where needed Manage basic networking & firewall rules in: Physical office (hardware firewalls like Fortinet / Palo Alto / Ubiquiti) Cloud (AWS Security Groups, NACLs, WAF) InfoSec (Basic / First line) Conduct regular user access reviews and implement least privilege Run basic vulnerability scans on endpoints and cloud systems Implement DLP (Data Loss Prevention) policies where needed Monitor and enforce phishing protection / SPF / DKIM / DMARC Setup endpoint monitoring / EDR tools (ex: CrowdStrike, SentinelOne) Ensure basic compliance tracking for ISO 27001 / SOC2 readiness Conduct InfoSec awareness training for employees (quarterly) AWS & Cloud Infra (Basic Admin) Monitor AWS usage and identify cost saving opportunities Manage AWS IAM users, policies, roles Manage basic AWS services : EC2, S3, RDS, CloudWatch, CloudTrail Assist DevOps team in ensuring secure cloud configurations Preferred Experience with AI-driven IT / InfoSec Tools Experience using or exploring AI-driven MDM platforms (Hexnode AI, Kandji AI, Jamf AI Assist, etc.) Familiarity with AI-assisted Identity Governance tools (Saviynt, Okta AI Assist, etc.) Understanding of AI-based Cloud Cost Optimization tools (CloudZero, OpsAI, AWS Trusted Advisor AI, Harness) Exposure to AI-based email security / DLP platforms (Abnormal Security, Material Security) Experience with AI-assisted VAPT & vulnerability scanning tools (Tenable, Plerion AI, Qualys AI) Familiarity with AI-powered IT Helpdesk platforms (Moveworks, Espressive, Aisera) Willingness to adopt AI-first approach to IT and InfoSec automation Skills & Requirements Mandatory 4+ years experience in Cloud-based IT Admin roles Hands-on experience with: Google Workspace / Azure AD / Okta MDM platforms Cloud networking & firewalls AWS IAM & basic cloud services Basic InfoSec knowledge: Endpoint security DLP Email security
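To illustrate the SPF/DKIM/DMARC monitoring item above, the sketch below queries a domain's SPF and DMARC TXT records with dnspython and flags missing records. The domain is an assumption; real enforcement would also validate DKIM selectors and the published policies.

```python
# Illustrative email-authentication check for an assumed domain.
import dns.resolver

def txt_records(name: str) -> list[str]:
    try:
        return [r.to_text().strip('"') for r in dns.resolver.resolve(name, "TXT")]
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return []

domain = "example.com"  # assumed domain
spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]

print("SPF:", spf or "missing, flag for follow-up")
print("DMARC:", dmarc or "missing, flag for follow-up")
```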
Posted 5 days ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
This is a remote position. MTC is seeking an organized Cloud DevOps Engineer who is responsible for designing, implementing, and managing scalable, secure, and reliable cloud infrastructure and DevOps pipelines, and for collaborating closely with software engineers, security experts, and IT operations to automate deployment processes, monitor systems, and optimize performance. Core Functional Responsibilities: Design, deploy, and manage cloud resources using best practices in AWS, Azure, or GCP. Ensure high availability, scalability, and security of cloud environments. Maintain version-controlled infrastructure configurations for consistency and repeatability. Implement and enforce security best practices (e.g., IAM, encryption, secure networking). Analyze system performance and optimize resource usage and costs. Recommend architectural improvements based on performance metrics. General Responsibilities: Set up monitoring, logging, and alerting systems. Act as a bridge between development and operations teams. Promote a culture of collaboration, continuous improvement, and shared responsibility. Collect and analyze logs for proactive troubleshooting and performance tuning. About Abhyaz: Abhyaz: Leading Talent Management and Internship Platform. Abhyaz, from MTAB Technology Center Pvt. Ltd., is a premier talent management platform offering internships across 150+ job roles, encompassing both engineering fields and non-engineering disciplines like data science, marketing, HR, finance, and operations. With over 100,000 applicants and more than 3,000 remote interns, Abhyaz boasts an impressive 80% placement rate, highlighting its effectiveness in preparing interns for the job market. Remote Internship Excellence: Abhyaz promotes a remote working culture, utilizing Moodle and Zoho One tools to facilitate seamless communication and project management. Moodle provides a comprehensive learning management system for course materials and assignments, while Zoho One streamlines collaboration and task management. Business Solutions: The platform offers business solutions to manage training operations and remote internships efficiently. By integrating Moodle and Zoho One, Abhyaz ensures smooth training processes and productive remote work environments. Exclusive Talent Pool: Abhyaz features an exclusive talent pool program that simplifies recruitment by connecting employers with a curated selection of skilled professionals, ensuring a streamlined hiring process. Educational Collaborations: Abhyaz partners with educational institutions to deliver dynamic programs that enhance skill development through practical applications. About Abhyaz Internships: Abhyaz Training and Internships is a remote program designed to equip you with the skills and experience you need to succeed in your chosen field. This is your chance to gain valuable hands-on exposure while working on real-world projects. Here's what you can expect: Remote Opportunity: Learn and work from the comfort of your own home. Program Duration: 4-16 weeks, allowing you to tailor the program to your needs. Structured Learning: The first week is dedicated to intensive training designed to develop your professional skills. Real-World Projects: Apply your learnings by working on critical projects alongside experienced professionals. Time Commitment: 25-30 hours per week to ensure you get the most out of the program. Mentorship and Guidance: A dedicated team of mentors will be there to support you throughout the program.
Portfolio Building: Showcase your work to potential employers through an online portfolio created by Abhyaz. Weekly Deliverables: Regular project deliveries will help you stay on track and demonstrate your progress. Peer and Supervisor Feedback: Receive valuable feedback to improve your skills and ensure you're meeting expectations. Job Placement Opportunities: Top-performing interns may be offered guidance and support to secure placements with reputable companies. By participating in Abhyaz Training and Internships, you'll gain the skills, experience, and portfolio you need to take the next step in your career. Hiring Process: Step 1: Job Postings on our Career page - Friday Step 2: Call for Registration and Enrolment - Friday Step 3: Completing Portfolio Submissions - Next Thursday Step 4: Evaluation Process ends on Abhyaz platform - Next Thursday Step 5: Internship offer - Friday Step 6: Onboard – Accept our Internship Offer and onboard - Monday Internship Work Timings at Abhyaz Full-Time Interns (11 AM – 5 PM) Must be fully available in the virtual office. Allowed to take scheduled breaks. Part-Time Interns Slot 1: 11 AM – 2 PM Slot 2: 2 PM – 5 PM Interns must be present in the virtual office during their chosen slot. Off-Time Batch (Flexible Work Hours) Must report to the virtual office between 5 PM – 6:30 PM. Work hours outside this period are flexible based on availability. Mentors will be available until 6:30 PM. Interns should provide task updates to the Project Management Executive. Please note: candidates are requested to fill out all the fields in the application form and not to use the easy apply option! Do follow us on Linkedin / Twitter / YouTube Requirements Bachelor’s or Master’s degree in Computer Science, Information Technology, or related fields. Basic understanding of cloud platforms like AWS, Azure, or Google Cloud Platform (GCP). Familiarity with DevOps concepts such as CI/CD, version control (e.g., Git), and automation. Strong organizational and communication skills. Ability to multitask and manage time effectively. Benefits Learn On-Demand SaaS Tools: Gain hands-on experience with industry-standard tools like Moodle and Zoho One, enhancing your tech skills. Out-of-the-Box Work: Engage in innovative projects that go beyond your primary job role, broadening your skill set. Remote Opportunities: Enjoy the flexibility of working from anywhere, making it convenient to balance other commitments. Diverse Project Experience: Work on internal projects as well as real client assignments, providing a well-rounded professional experience. Online Portfolio Building: Develop a strong online portfolio showcasing your work, which can be invaluable for future job applications. Flexible Timing: Benefit from flexible working hours, allowing you to manage your time effectively and maintain a healthy work-life balance. Terms & Conditions apply
Posted 5 days ago
5.0 years
0 Lacs
India
On-site
The key focus for the senior data architect is to perform planning aligned to key data solutions, build and participate in the architecture capability building, perform data architecture and design, manage data architecture risk and compliance, provide design and build governance and support, and communicate and share knowledge around the architecture practices, guardrails, blueprints and standards related to the data solution design. Describe The Main Activities Of The Job (description) Planning Lead data solution requirements gathering and ensure alignment with business objectives and constraints Define and refine data architecture runways for intentional architecture with the key stakeholders Provide input into business cases and costing Participate and provide data architectural runway requirements into Programme Increment (PI) Planning Architecture Capability Develop and oversee data architecture views and ensure alignment with enterprise architecture Maintain and oversee the data solution artifacts in the set enterprise repository and knowledge portals aligned to the rest of the architecture Manage the data architecture processes based on the requirements for each archetype Manage change impact of the data architecture with stakeholders Develop and participate in the build of the data architecture practice with embedded architects and engineers including the relevant methods, repository and tools Manage the data architecture considering the business, application, information/data and technology viewpoints Establish, enforce and implement data standards, guardrails, frameworks, and patterns Solution Design Lead and review logical and detailed data architecture Evaluate and approve data solution options and technology selections Select appropriate technology, tools and build for the solution Oversee and maintain the data solution blueprints Drive incremental modernisation initiatives in the delivery area Risk, Governance and Compliance Identify, assess and mitigate risks at a data solution architecture level Ensure and enforce compliance with policies, standards, and regulations Lead data architecture reviews and integrate with governance functions Integrate with other governance and compliance functions to ensure continuity in managing the investment and risk for the organisation pertaining to the solution architectures Establish and provide data standards, guidance, and tools to delivery teams Implementation and Collaboration Establish and provide data solution architectures and tools to the delivery and data engineering teams Lead and facilitate collaboration with delivery teams to achieve architecture objectives Manage and resolve deviations and ensure up-to-date data solution design documentation Identify opportunities to optimise delivery of solutions Oversee and conduct post-implementation reviews Ensure the data architecture supports CI/CD pipelines to facilitate rapid and reliable deployment of data solutions Implement automated testing frameworks for data solutions to ensure quality and reliability throughout the development lifecycle Establish performance monitoring and optimisation practices to ensure data solutions meet performance benchmarks and can scale as needed Integrate robust data security measures, including encryption, access controls, and regular security audits, into the implementation process Communication and Knowledge Sharing Communicate and advocate up-to-date data solution architecture views Communicate the relevant data standards, practices, guardrails and tools
to stakeholders relevant to the solution design Ensure IT teams are well-informed and trained in architecture requirements Communicate and collaborate with stakeholders on relevant views of planning, technology assessments, risk, compliance, governance and implementation assessments Foster collaboration between data architects, data engineers, and other IT teams through regular cross-functional meetings and agile ceremonies Communicate and maintain up-to-date blueprint designs for key data solutions Ensure effective participation in the agile ceremonies (PI planning, sprint planning, retrospectives, demos) Implement regular feedback loops with stakeholders and end-users to continuously improve data solutions based on real-world usage and requirements Create a culture of knowledge sharing by organising regular workshops, training sessions, and documentation updates to keep all team members informed about the latest data architecture practices and tools Minimum Qualifications/Experience (required For The Job) Matric; degree or diploma in Information Technology, Computer Science, Engineering OR relevant diploma / degree Experience: Requires a minimum of 5 years in a technical/solution design role and a minimum of 7 years relevant IT experience Data Experience: Requires a minimum of 7 years related experience in data engineering, data modeling and design and data management and governance Data Related Experience: Big Data and Analytics (e.g., Hadoop, Spark) Data Warehousing (e.g., Databricks, Snowflake, Redshift) Master Data Management (MDM) Data Lakes and Data Mesh Metadata Management ETL/ELT Processes Data Privacy and Compliance Cloud Data Services Additional Qualifications/Experience (preferred) DAMA-DMBOK TOGAF ArchiMate Cloud Certifications (AWS, Azure) Financial Industry Experience Competencies Required Related attributes and competencies related to architecture: Critical thinking/problem solving Teamwork/collaboration Effective Communication Skills Leadership skills Knowledge and experience in architecture domains Knowledge and experience in architecture methods, frameworks and tools Solution Design Experience Agile Knowledge and Experience Cloud Knowledge and Experience Data related competencies: Data modeling, database design and data governance best practices and implementation Data architecture principles and methodologies Data integration technologies and tools Data management and governance
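One implementation responsibility above is automated testing for data solutions. As a minimal, hedged sketch of that idea, a pandas-based quality check might look like the following; the table and column names (customer_id, order_total) are hypothetical, not from the role.

```python
import pandas as pd

# Minimal illustration of an automated data-quality test; the column names
# below (customer_id, order_total) are invented for illustration.
def check_customer_orders(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable data-quality failures."""
    failures = []
    if df["customer_id"].isnull().any():
        failures.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        failures.append("customer_id is not unique")
    if (df["order_total"] < 0).any():
        failures.append("order_total contains negative values")
    return failures

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"customer_id": [1, 2, 2, None], "order_total": [100.0, 50.0, -5.0, 20.0]}
    )
    for failure in check_customer_orders(sample):
        print("FAILED:", failure)
```

A check like this could run in a CI/CD pipeline so that data solutions are tested on every deployment, which is the spirit of the responsibility described above.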
Posted 5 days ago
0 years
0 Lacs
Mumbai Metropolitan Region
Remote
Role : Database Engineer Location : Remote Notice Period : 30 Days Skills And Experience Bachelor's degree in Computer Science, Information Systems, or a related field is desirable but not essential. Experience with data warehousing concepts and tools (e.g., Snowflake, Redshift) to support advanced analytics and reporting, aligning with the team’s data presentation goals. Skills in working with APIs for data ingestion or connecting third-party systems, which could streamline data acquisition processes. Proficiency with tools like Prometheus, Grafana, or ELK Stack for real-time database monitoring and health checks beyond basic troubleshooting. Familiarity with continuous integration/continuous deployment (CI/CD) tools (e.g., Jenkins, GitHub Actions). Deeper expertise in cloud platforms (e.g., AWS Lambda, GCP Dataflow) for serverless data processing or orchestration. Knowledge of database development and administration concepts, especially with relational databases like PostgreSQL and MySQL. Knowledge of Python programming, including data manipulation, automation, and object-oriented programming (OOP), with experience in modules such as Pandas, SQLAlchemy, gspread, PyDrive, and PySpark. Knowledge of SQL and understanding of database design principles, normalization, and indexing. Knowledge of data migration, ETL (Extract, Transform, Load) processes, or integrating data from various sources. Knowledge of cloud-based databases, such as AWS RDS and Google BigQuery. Eagerness to develop import workflows and scripts to automate data import processes. Knowledge of data security best practices, including access controls, encryption, and compliance standards. Strong problem-solving and analytical skills with attention to detail. Creative and critical thinking. Strong willingness to learn and expand knowledge in data engineering. Familiarity with Agile development methodologies is a plus. Experience with version control systems, such as Git, for collaborative development. Ability to thrive in a fast-paced environment with rapidly changing priorities. Ability to work collaboratively in a team environment. Good and effective communication skills. Comfortable with autonomy and ability to work independently.
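Since the role above mentions Pandas, SQLAlchemy, and an eagerness to automate data import workflows, here is a minimal, hedged sketch of such a workflow. The CSV path, table name, and connection string are placeholders, not details from the posting.

```python
import pandas as pd
from sqlalchemy import create_engine

# Hypothetical import workflow: load a CSV, do light clean-up, and append the
# rows to a PostgreSQL table. Path, table name, and connection string are
# placeholders for illustration only.
ENGINE = create_engine("postgresql+psycopg2://user:password@localhost:5432/analytics")

def import_csv(path: str, table: str) -> int:
    df = pd.read_csv(path)
    df = df.drop_duplicates()                              # basic hygiene before loading
    df.columns = [c.strip().lower() for c in df.columns]   # normalise column headers
    df.to_sql(table, ENGINE, if_exists="append", index=False)
    return len(df)

if __name__ == "__main__":
    rows = import_csv("daily_export.csv", "staging_daily_export")
    print(f"loaded {rows} rows")
```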
Posted 5 days ago
8.0 years
0 Lacs
India
Remote
**This is a 6-month contract. This is a remote role with preference for talent based in SEA/APAC** Our client is a technology solutions firm seeking a senior AWS Solutions Architect with deep pre-sales and delivery experience across cloud readiness, assessment, and migration. The consultant will work closely with the internal sales and delivery teams to lead client engagements from early-stage discovery and architecture design through to cloud implementation. This role is critical in shaping and executing robust migration strategies, particularly for enterprises moving from complex on-prem environments to AWS or transitioning across cloud platforms. You will act as the primary technical advisor and solution architect, providing strategic direction and hands-on guidance throughout the engagement lifecycle, with a focus on AWS solutions and hybrid infrastructure environments. Key Responsibilities Lead AWS Cloud Readiness Assessments: Analyze infrastructure, applications, TCO, licensing, and migration feasibility. Architect End-to-End Cloud Journeys: Design tailored AWS migration strategies using the 7R framework (Rehost, Refactor, etc.). Act as Pre-Sales Technical Lead: Support RFPs, client pitches, solution presentations, and stakeholder alignment. Draft and Present Architecture Designs: Create technical blueprints, hybrid/cloud designs, and compliance-aligned roadmaps. Collaborate with DevOps & Security: Ensure seamless transition and deployment, maintaining security and performance standards. Implement Best Practices: Embed AWS Well-Architected Framework principles in all solutions. Drive Delivery Execution: Oversee solution build-out, liaise with engineering, and troubleshoot post-implementation issues. Ideal Profile 8+ years of experience in AWS architecture and migration projects (on-prem to cloud and cloud-to-cloud). Proven background in both pre-sales and technical delivery roles across complex enterprise environments. Deep understanding of data centers, networking, storage, virtualization, and cloud infrastructure. Strong experience with AWS tools (EC2, RDS, S3, Lambda, CloudFormation, EKS, Direct Connect, VPN, etc.). Skilled in CI/CD tools (e.g., GitLab), Infrastructure as Code (Terraform, CloudFormation), and scripting (Python, Bash). Familiarity with hybrid architectures and security best practices (IAM, encryption, monitoring). Excellent stakeholder management and communication skills for both technical and business audiences. Preferred Certifications & Education AWS Certified Solutions Architect – Professional (required) AWS Certified DevOps Engineer – Professional (preferred) Bachelor's degree in Computer Science, Engineering, or related field Bonus: Certifications in Azure or hybrid cloud platforms
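One step of the cloud readiness assessments described above is building an inventory of what is running today. A small, hedged boto3 sketch is shown below; it only lists EC2 instances, and a real engagement would lean on dedicated discovery tooling rather than a script like this. Credentials are assumed to come from the default AWS credential chain.

```python
import boto3

# Hedged sketch of one readiness-assessment step: inventory EC2 instances so
# sizing and 7R disposition (rehost, refactor, retire, ...) can be discussed.
ec2 = boto3.client("ec2")

def list_instances() -> list[dict]:
    inventory = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append(
                    {
                        "id": instance["InstanceId"],
                        "type": instance["InstanceType"],
                        "state": instance["State"]["Name"],
                    }
                )
    return inventory

if __name__ == "__main__":
    for item in list_instances():
        print(item)
```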
Posted 5 days ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
We enable financial institutions to become digital leaders. As a professional team of global scale, we work with best clients for great and exciting projects, in an environment where we learn amazing things every day. Each code, each voice, each contribution, each challenge, each success is celebrated here. We welcome candidates who share our values, have the skills and are passionate to enjoy our journey to build the digital future of finance, together. About the job: We are looking for a Principal Cloud Engineer who will be building and managing cloud infrastructures through the utilization of advanced technologies. What you will be doing: Design and build secure, highly scalable, multi-tier, single/multi-tenant cloud infrastructure solutions on platforms such as Azure, AWS, or GCP, aligning them with the company's business objectives. Create and manage the cloud solution infrastructures with infrastructure as code (IaC) tools like Terraform, PowerShell, ARM scripts and automating provisioning and deployment processes with CI/CD solutions such as Azure DevOps. Build fully automated CI/CD pipelines, DevOps processes and improve the existing ones according to the latest technologies and project requirements. Implement and maintain robust security practices, including access controls, encryption, and compliance with industry standards and best practices for VeriPark solutions and projects. Set up monitoring, logging, and alerting systems to proactively identify and resolve issues, ensuring optimal performance and availability. Investigate and resolve complex technical issues related to cloud infrastructure and services, providing root cause analysis and preventive measures. Advise pre-sales & technical teams on best practice architectures for cloud solutions. This can be a reactive response to a customer need or request for proposals and includes participating in proof-of-concepts, making solution demonstrations, presentations, and preparing Azure consumption estimates for the proposed cloud solution architectures. Mentor and lead VeriPark project team members and customers to be proficient in delivering cloud-based solutions. Identify, build, drive programs and R&D studies to establish new technical practices and learn private or public preview cutting-edge technologies. Document and share designs, technical best practices/insights with internal teams. Maintain and advance technical skills and knowledge, keeping up to date with technology trends and competitive insights on cloud and on-prem technologies. There should be no factors preventing travel abroad (0-30%). What we are looking for: 8+ years of relevant work and Microsoft Technologies experience Bachelor's degree in Computer Science or relevant work experience Experience in building scalable, multi-tier and data-intensive systems. Experience in at least one of the following cloud platforms is required, preferably Microsoft Azure. Excellent English skills. What we are offering: Your Way: At VeriPark we believe in the power of talent, no matter where it resides. Design your ideal workspace and achieve the perfect work-life balance. Performance-Linked Bonus: Your hard work doesn't go unnoticed! Enjoy a performance-linked bonus as a testament to your dedication! Rewards Beyond the Job: Enjoy a comprehensive benefits package, including Remote Work Support, Health Insurance, Care Program, and Online Psychological Support. We care about you! Birthday Leave, Because You Matter: We value your special moments!
Take the day off on your birthday and treat yourself. Global Impact, Cutting-Edge Tech: Immerse yourself in global projects with top-tier clients and stay ahead with cutting-edge technologies. Your skills will shape the future of our industry. Unleash Your Potential: Develop yourself with VeriPark Academy opportunities: webinars and in-house training sessions. Diverse, Vibrant Community: Be part of a dynamic environment that values diversity and inclusivity. Together Culture: Even in a remote world, we cultivate connections through engaging face-to-face gatherings as well as fun online events, and a shared information space where you can stay updated and aligned.
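To make the infrastructure and governance side of the role above concrete, here is a small, hedged Python sketch using the Azure SDK (the azure-identity and azure-mgmt-resource packages are assumed to be installed): it lists resource groups and flags any missing an "owner" tag. The tag convention and environment variable are assumptions for illustration, not VeriPark practice.

```python
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Hedged sketch of a small operational/governance check on Azure: list
# resource groups and flag any without an "owner" tag. The subscription ID
# comes from an environment variable; the tag convention is an assumption.
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

for group in client.resource_groups.list():
    tags = group.tags or {}
    if "owner" not in tags:
        print(f"resource group {group.name} ({group.location}) has no owner tag")
```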
Posted 6 days ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Cloud Data Engineer | Database Administrator | ETL & Power BI | DevOps Enthusiast Job Location: Hyderabad/Chennai Job Type: Full Time Experience: 6+ Yrs Notice Period - Immediate to 15-day joiners are highly preferred About the Role: We are seeking a Cloud Data Engineer & Database Administrator to join our Cloud Engineering team and support our cloud-based data infrastructure. This role focuses on optimizing database operations, enabling analytics/reporting tools, and driving automation initiatives to improve scalability, reliability, and cost efficiency across the data platform. Key Responsibilities: Manage and administer cloud-native databases, including Azure SQL, PostgreSQL Flexible Server, Cosmos DB (vCore), and MongoDB Atlas. Automate database maintenance tasks (e.g., backups, performance tuning, auditing, and cost optimization). Implement and monitor data archival and retention policies to enhance query performance and reduce costs. Build and maintain Jenkins pipelines and Azure Automation jobs for database and data platform operations. Design, develop, and maintain dashboards for cost tracking, performance monitoring, and usage analytics (Power BI/Tableau). Enable and manage authentication and access controls (Azure AD, MFA, RBAC). Collaborate with cross-functional teams to support workflows in Databricks, Power BI, and other data tools. Write and maintain technical documentation and standard operating procedures (SOPs) for data platform operations. Work with internal and external teams to ensure alignment of deliverables and data platform standards. Preferred Qualifications: Proven experience with cloud platforms (Azure preferred; AWS or GCP acceptable). Strong hands-on expertise with relational and NoSQL databases. Experience with Power BI (DAX, data modeling, performance tuning, and troubleshooting). Familiarity with CI/CD tools (Jenkins, Azure Automation) and version control (Git). Strong scripting knowledge (Python, Bash, PowerShell) and experience with Jira, Confluence, and ServiceNow. Understanding of cloud cost optimization and billing/usage tracking. Experience implementing RBAC, encryption, and security best practices. Excellent problem-solving skills, communication, and cross-team collaboration abilities. Nice to Have: Hands-on experience with Databricks, Apache Spark, or Lakehouse architecture. Familiarity with logging, monitoring, and incident response for data platforms. Understanding of Kubernetes, Docker, Terraform, and advanced CI/CD pipelines. Required Skills: Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent professional experience). 6+ years of professional experience in data engineering or database administration. 3+ years of database administration experience in Linux and cloud/enterprise environments. About the Company: Everest DX – We are a Digital Platform Services company, headquartered in Stamford. Our Platform/Solution includes Orchestration, Intelligent operations with BOTs, AI-powered analytics for Enterprise IT. Our vision is to enable Digital Transformation for enterprises to deliver seamless customer experience, business efficiency and actionable insights through an integrated set of futuristic digital technologies.
Digital Transformation Services - Specialized in designing, building, developing, integrating, and managing cloud solutions; modernizing data centers; building cloud-native applications; and migrating existing applications into secure, multi-cloud environments to support digital transformation. Our Digital Platform Services enable organizations to reduce IT resource requirements and improve productivity, in addition to lowering costs and speeding digital transformation. Digital Platform - Cloud Intelligent Management (CiM) - An Autonomous Hybrid Cloud Management Platform that works across multi-cloud environments. It helps enterprises get the most out of their cloud strategy while reducing cost and risk and improving speed. To know more, please visit: http://www.everestdx.com
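The data archival and retention responsibility above can be pictured with a short, hedged sketch: a Python job that moves expired rows into an archive table in PostgreSQL. The table names, the created_at column, the retention window, and the DSN are all placeholders, not details from the role.

```python
import psycopg2

# Hypothetical retention job: table/column names (events, events_archive,
# created_at), the retention window, and the DSN are placeholders.
RETENTION = "180 days"
DSN = "dbname=analytics user=dba host=localhost"

def archive_old_events() -> None:
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        # Copy expired rows into the archive table, then remove them, inside
        # a single transaction so the two steps stay consistent.
        cur.execute(
            "INSERT INTO events_archive SELECT * FROM events "
            "WHERE created_at < now() - %s::interval",
            (RETENTION,),
        )
        cur.execute(
            "DELETE FROM events WHERE created_at < now() - %s::interval",
            (RETENTION,),
        )
        print(f"archived {cur.rowcount} rows older than {RETENTION}")

if __name__ == "__main__":
    archive_old_events()
```

A job like this could be scheduled from Jenkins or Azure Automation, in keeping with the automation responsibilities listed in the posting.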
Posted 6 days ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Overview: We are looking for a talented and experienced Application Security Engineer to join our team. The ideal candidate will have a strong understanding of application security standards, tools, and methodologies and will be responsible for conducting security assessments, penetration testing, and vulnerability analysis for web and mobile applications. This role requires hands-on experience with both automated and manual testing tools, familiarity with security mechanisms, and a commitment to improving the overall security posture of the organization. Key Responsibilities: • Conduct security assessments for both web and mobile applications. • Perform vulnerability assessments and penetration tests using tools such as Burp Suite Pro, AppScan, Veracode, Fortify, WebInspect, Acunetix, etc. • Leverage mobile application testing tools like Drozer, Xposed, MobSF, SSLTrustKiller, Frida, apktool, dex2jar, jadx, and IDA for iOS and Android applications. • Conduct thorough testing of APIs to identify security flaws. • Utilize OWASP and SANS standards to guide security practices. • Stay up to date with the latest security testing tools, techniques, and ethical hacking methodologies. • Compile and present risk-based findings to stakeholders, providing detailed reports and suggesting appropriate mitigations. • Provide expertise on penetration testing methodologies, including black box, grey box, and white box testing. • Demonstrate proficiency with common penetration testing tools such as nmap, Wireshark, Kali Linux, Metasploit, OpenVAS, OWASP ZAP, Acunetix, Nikto, Nessus, and sqlmap. • Assist development teams with implementing penetration tests as part of the Secure Software Development Life Cycle (Secure SDLC). • Create and refine security checklists tailored to organizational needs. • Ensure continuous security improvement by making suggestions for system and process enhancements. • Experience working with SaaS, IaaS, and PaaS environments, helping integrate and optimize security technologies and processes. Skills and Qualifications: • Proficiency with OWASP Top 10 and SANS security standards. • Strong experience in using security assessment tools, including both static (SAST) and dynamic (DAST) application security testing tools. • Hands-on experience with mobile application security testing and mobile-specific vulnerabilities. • Proficient with web technologies such as J2EE, XML, JSON, SOAP, REST, and AJAX. • Basic programming knowledge in Java, JavaScript, and SQL. • Familiarity with encryption, authentication, and authorization techniques for secure software development. • Experience in automating security testing using scripting languages like Python, Bash, or Java. • Knowledge of network security and vulnerability assessment practices. • Experience in Secure Code Review and identifying vulnerabilities in the source code. • Strong understanding of various security techniques and risk assessment processes. Certifications: • Certified Ethical Hacker (CEH) or equivalent certifications related to application security. Desired Competencies: • OWASP, Burp Suite, Web Application Security, Acunetix, Vulnerability Assessment, Network Security, Mobile Application Security. • Proficient in Secure Code Review, Python, Bash, Java, and Automation scripting.
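As a small illustration of the "automating security testing using scripting languages" requirement above, the hedged sketch below checks a response for common security headers. The target URL is a placeholder, and a real programme would rely on the DAST tools named in the posting rather than a script like this.

```python
import requests

# Sketch of a security-header check that could run in CI; the URL below is a
# placeholder, not an application from the posting.
EXPECTED_HEADERS = [
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
    "X-Frame-Options",
]

def check_headers(url: str) -> list[str]:
    """Return the expected security headers missing from the response."""
    response = requests.get(url, timeout=10)
    return [h for h in EXPECTED_HEADERS if h not in response.headers]

if __name__ == "__main__":
    missing = check_headers("https://example.com")
    if missing:
        print("Missing security headers:", ", ".join(missing))
    else:
        print("All expected security headers present")
```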
Posted 6 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Project Role: Application Lead Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills: React Native Good to have skills: Utilities Retail Minimum 5 Year(s) Of Experience Is Required Educational Qualification: 15 years full time education Summary: We are looking for a highly skilled Senior React Native Developer to join our mobile engineering team. You will be responsible for building and maintaining high-quality mobile applications using React Native and Expo, with a strong focus on performance, accessibility, and security. You’ll collaborate closely with designers, backend engineers, and product managers to deliver seamless user experiences. Roles & Responsibilities: Develop and maintain mobile applications using React Native and Expo Write clean, maintainable code in TypeScript and modern JavaScript Optimise app performance and responsiveness across platforms Debug and troubleshoot native builds using Xcode (iOS) and Gradle (Android) Implement secure storage and authentication mechanisms (e.g. token storage, biometric encryption, root detection) Build responsive layouts using Flexbox and custom SVG graphics with D3 Develop and test components in isolation using Storybook Integrate local and remote web content within mobile apps Patch JavaScript and native code to resolve compatibility issues during upgrades Ensure accessibility compliance using native tools (VoiceOver, TalkBack, Accessibility Inspector) Collaborate using Git, GitHub, and Azure DevOps Translate Figma designs into functional components using design tokens and shared libraries Integrate with authentication providers such as Azure B2C or similar OAuth IDPs Write and maintain unit tests to ensure code quality Professional & Technical Skills: Experience with C# .NET for backend API development Familiarity with Azure Application Insights for backend debugging Exposure to performance testing tools like JMeter Experience writing and maintaining Azure DevOps pipeline scripts Additional Information: Proven experience delivering production-grade React Native apps Strong understanding of mobile security and accessibility best practices Ability to work independently and collaboratively in a fast-paced environment Passion for clean code, testing, and continuous improvement
Posted 6 days ago
1.0 - 31.0 years
3 - 4 Lacs
Maligaon, Guwahati
On-site
We’re seeking a Full Stack Software Developer who can move fast, use AI tools productively, and help ship a secure and scalable app for both Android and iOS platforms. This is a hands-on engineering role where speed, pragmatism, and independence matter more than credentials. What You’ll Do Build and maintain cross-platform mobile app features using React Native or similar Develop backend APIs and integrations (Firebase, Supabase, Node.js, or Python preferred) Integrate real-time features like video calling, live location tracking, and notifications Use AI coding assistants (e.g. Copilot, ChatGPT) to accelerate development Collaborate with the design and product team to rapidly iterate on features Optimize performance, security, and UX for fast-loading and trustworthy experiences You Might Be a Great Fit If: You’ve built and deployed at least one full-stack app (even if it’s a side project) You’ve used React Native, Flutter, or Kotlin/Swift for mobile development You’re comfortable working with Firebase, Supabase, or backend frameworks You leverage AI tools (like GitHub Copilot or ChatGPT) to ship faster You can independently build features end-to-end — frontend to backend You think in terms of product, not just code Bonus Points For: Working knowledge of WebRTC, 100ms, or Agora (video APIs) Exposure to CI/CD, TypeScript, or mobile testing frameworks Familiarity with security, data encryption, or working on sensitive user apps
Posted 6 days ago
1.0 - 5.0 years
0 Lacs
India
Remote
Pebbling AI 🚀 We orchestrate agents, systems, and data anywhere. Imagine a world where every system, service, and application can work together intelligently, discovering capabilities, coordinating workflows, and executing complex tasks across any boundary. We're making that reality. Our platform handles workflow orchestration, trust and identity, federated discovery, and secure communication. Together, these components create the coordination backbone that enterprises need to unlock the full potential of intelligent systems, laying the foundation for the internet of agents. What you'll build: You'll work across our entire platform stack, building the backend systems that power agent coordination at enterprise scale. Secure infrastructure: Authentication, identity, and trust systems - that's our DNA. Orchestration services: Backend APIs and workflow coordination engines. Communication protocols: Messaging, data exchange, and transactions between distributed systems. Discovery systems: Service registries and metadata management. High-performance architecture: Scalable systems handling complex distributed workloads What we need: 1-5 years backend engineering experience with production systems. Strong development skills (FastAPI, async/await, type hints) - Python and Rust preferred. Security experience: Auth, encryption, or secure APIs. Database expertise: PostgreSQL, schema design, performance optimisation. Infrastructure knowledge: Docker, CI/CD, monitoring, and observability. Bonus: Experience with distributed systems, message queues, MCP or agent frameworks Why join us: Build the future. Your code will power the next generation of the internet. Early team member – Work directly with founders, shape the company. Build it and be proud of it. Build core systems – Work on the foundational infrastructure that everything else depends on. Solve hard problems – Real challenges that no one else is tackling. Location: Remote (India) Level: SW2 (1-5 years experience) Compensation: Competitive salary with ESOP Start: Immediate NO BAR ON COLLEGE AND DEGREE, WE VALUE SKILLS NOT CERTIFICATES. Most important: We value a fast startup culture and creative hacking mindset. CVs are great, but make sure to send us your GitHub, look at our GitHub project, and tell us why you want to build with us. Star ⭐ the Pebbling project on GitHub and find out how you can contribute: https://github.com/Pebbling-ai/pebble Ready to build the future of AI? Let's Pebble! 🚀 Looking forward!!!
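To give a feel for the "discovery systems" and FastAPI skills mentioned above, here is a toy, hedged sketch of a capability-based service registry. The endpoint paths and the AgentCard fields are invented for illustration and are not Pebbling's actual API.

```python
from fastapi import FastAPI
from pydantic import BaseModel

# Toy sketch of the "service registries and metadata management" idea: agents
# register themselves and can be discovered by capability. Endpoint paths and
# model fields are invented, not Pebbling's real API.
app = FastAPI(title="toy-agent-registry")

class AgentCard(BaseModel):
    name: str
    endpoint: str
    capabilities: list[str]

REGISTRY: dict[str, AgentCard] = {}

@app.post("/agents")
async def register(agent: AgentCard) -> dict[str, str]:
    REGISTRY[agent.name] = agent
    return {"status": "registered", "name": agent.name}

@app.get("/agents/{capability}")
async def discover(capability: str) -> list[AgentCard]:
    return [a for a in REGISTRY.values() if capability in a.capabilities]
```

Saved as registry.py, this could be run locally with uvicorn (uvicorn registry:app --reload) and exercised with simple HTTP requests.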
Posted 6 days ago
3.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Location: Bengaluru (Client Site) Job Type: Full-time Experience: 3-7 years Notice Period: 0-15 days (immediate joiners preferred) No. of Positions: 1 Lead & 4 Engineers About The Role We are seeking a skilled FPGA Engineer with 3-7 years of experience in RTL design using Verilog, along with expertise in Xilinx MPSoC platforms, MicroBlaze processor development, and embedded system security aspects such as authentication, encryption/decryption, and certificates. The ideal candidate will play a key role in architecting and implementing secure, high-performance digital logic systems. Requirement Experience band 3-7 years Experience in RTL coding using Verilog Experience in development on Xilinx MPSoC (preferably ZCU 106/104) Hands-on experience with Xilinx Vivado and Vitis Desirable to have experience with MISRA C coding guidelines Desirable to have experience with DO-254 Desirable to have experience with MicroBlaze Desirable to have experience in security aspects of authentication, certificates, encryption/decryption How to Apply If you are passionate about embedded systems and meet the above requirements, we would love to hear from you. Kindly share your resume at: hr@advantal.net For more information, connect with us at: +91 91312 95441
Posted 6 days ago