7.0 years
40 Lacs
Jamshedpur, Jharkhand, India
Remote
Experience: 7.00+ years
Salary: INR 4,000,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients, MatchMove.)

What do you need for this opportunity?
Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As the Technical Lead for the Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation (see the sketch at the end of this listing).
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model:
- Direct placement with the client
- This is a remote role
- Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role is to help talent find and apply for relevant opportunities and progress in their careers, and we will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
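To make the Iceberg-on-S3 responsibility above concrete, here is a minimal, hedged sketch of a PySpark batch job that lands DMS staging data into a day-partitioned Iceberg table through the Glue catalog. It is illustrative only, not MatchMove's actual pipeline: the bucket, database, table, and column names are hypothetical, and it assumes a Glue/EMR Spark runtime with the Iceberg and AWS bundles on the classpath and a pre-existing 'payments' Glue database.

```python
# Illustrative sketch only; names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("dms-to-iceberg")
    # Register an Iceberg catalog ("lake") backed by the AWS Glue Data Catalog.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-lake/warehouse")
    .getOrCreate()
)

# Read raw change records landed by DMS as Parquet in a staging prefix.
changes = spark.read.parquet("s3://example-lake/staging/payments/transactions/")

# Day-partitioned Iceberg table: supports time travel and partition pruning.
spark.sql(
    """
    CREATE TABLE IF NOT EXISTS lake.payments.transactions (
        txn_id STRING, amount DECIMAL(18,2), status STRING, updated_at TIMESTAMP)
    USING iceberg
    PARTITIONED BY (days(updated_at))
    """
)

# Append the batch; a MERGE INTO would be used instead to apply CDC updates.
changes.writeTo("lake.payments.transactions").append()
```

In a real CDC flow the append would typically be replaced by a MERGE keyed on txn_id, so DMS updates and deletes are applied rather than duplicated.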
Posted 1 day ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Req ID: 79262
Location: Bangalore, Karnataka, India
Job Title: Technical Support Engineer
Team: Technical Customer Experience Centre (TAC) at CommScope Ruckus

Role Purpose
The Technical Support Engineer (TSE) will be responsible for providing the first level of technical support for various technologies in Ruckus Wireless products to our enterprise customers.

Key Responsibilities
- Be the first technical point of contact for the customer.
- Manage critical (P1) cases soon after completing training on Ruckus products.
- Work with the Escalation team and/or engineering teams to manage escalated cases.
- Work closely with SE teams internally on larger networks and more complex issues.
- Identify customer problems and assist customers in resolving them while consistently providing a great customer experience.
- Work on complex problems where analysis of situations requires in-depth fault analysis and troubleshooting skills.
- Identify and reproduce customer technical problems in a test/lab environment.
- Work on day-to-day tickets, follow up with clients, provide feedback, and see problems through to resolution.
- Ensure proper case documentation and closure.
- Generate clear and concise documentation in the form of case notes, technical tips, and white papers.
- Contribute to the knowledge base by creating KB articles.
- Notify and discuss with Staff or Principal Engineers calls and emails that require assistance; ensure timely handoff (escalation) of cases that require advanced technical investigation by the LTE team.
- Manage customer expectations and make sure customers receive the highest quality of service.
- Document customer issues for future reference and build a knowledge base of the solutions given to customers.
- Actively participate in trainings and improve product and process knowledge.
- Adhere strictly to Service Level Agreement KPIs; understand the SLAs and align your way of working towards meeting them.
- KRAs include: CSAT, Active Backlog, Aged Backlog, Time to Resolve and Time to Close, Escalation %, and KB Contribution.

Required Experience
- Minimum of 2 years of customer support experience in IP networks, WiFi, or a related environment. Data networking is mandatory; wireless networking experience is desired. Preferably worked as a TAC engineer.
- Excellent written and oral communication ability, including formal presentation skills to customers, partners, Ruckus accounts, and support teams.
- Good understanding and applied knowledge of TCP/IP, IGMP, switching and routing (Layer 2 & 3 communication), and internet protocols including DNS, DHCP, SMTP, VLAN, etc.
- Good understanding of and troubleshooting skills with 802.11 wireless technologies and WLAN authentication, encryption, EAP, PSK, RADIUS, AAA, and DNS.
- Good understanding of RF transmission and antenna behavior.
- Technical expertise in troubleshooting and resolving complex Layer 2/3 and/or wireless issues in multi-vendor environments.
- Knowledge of deploying, configuring, supporting, troubleshooting, debugging, and administering the following wireless LAN products and technologies: wireless access points, wireless client associations, wireless controllers.
- Experience providing support to direct customers, resellers, and field personnel in resolving company product-related issues.
- Experience working in a support lab environment for problem replication.
- Experience documenting the sequence of events related to resolving customer technical issues.
- Comfortable analyzing data traces from protocol analyzers such as Wireshark (see the sketch at the end of this listing).
- Bachelor's degree or diploma in a computer-related field, or equivalent work experience.

Experience Considered Favorably
- Working knowledge of Salesforce and JIRA.
- Multiple language skills.
- Experience working in (or with) a vendor.
- Relevant industry accreditations/certifications: CWNA, CCNA, JNCIA.

Communication/Work Style
- Excellent communication/interpersonal skills to clearly and simply articulate ideas, frame problems, and offer solutions.
- Ability to understand and analyze customer issues, along with good troubleshooting skills.
- Ability to communicate clearly and effectively with clients and peers.
- A belief in ownership, with good problem-solving and decision-making skills.
- Must maintain a professional attitude and demeanor, and be highly motivated and self-directed.
- Encourages and accepts feedback.
- Self-driven, proactive team player.

Work Schedule
Monday through Friday, or a 'staggered work week' (i.e., Sunday through Thursday or Tuesday through Saturday), with weekend or overnight hours as required.

Learn more about how we're on a quest to connect the future and build what's next.
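As a hypothetical illustration of the trace-analysis skill above (not a Ruckus tool or process), the snippet below uses the pyshark wrapper around tshark to pull 802.1X/EAPOL frames out of a capture, the kind of first pass a TSE might make on a failing WPA2 handshake. It assumes tshark and pyshark are installed; the capture file name is made up.

```python
# Hypothetical sketch; assumes a wireless capture with 802.11 headers present.
import pyshark

# Filter the capture down to EAPOL frames to inspect the 4-way handshake.
capture = pyshark.FileCapture("client_roam.pcap", display_filter="eapol")

for pkt in capture:
    # Print each handshake frame with its timestamp and source/destination MACs.
    print(pkt.sniff_time, pkt.wlan.sa, "->", pkt.wlan.da, pkt.highest_layer)

capture.close()
```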
Posted 1 day ago
6.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Req ID: 79261
Location: Bangalore, Karnataka, India

Role Purpose
Working in a fast-paced environment, the Sr. Staff Technical Support Engineer will primarily be responsible for cases escalated by the customer, a TSE, or a Senior TSE, across various technologies in Ruckus Wireless products, for our enterprise customers.

Key Operational Responsibilities
- Be the first escalation point of contact for the customer and/or the TAC TSEs.
- Manage critical (P1) cases soon after completing training on Ruckus products.
- Work with the Escalation team and/or engineering teams to manage escalated cases.
- Demonstrate mentorship by working with TSEs and Senior TSEs on their backlog, and provide feedback on their performance to managers.
- Work closely with SE teams internally on larger networks and more complex issues.
- Identify customer problems and assist customers in resolving them while consistently providing a great customer experience.
- Work on complex problems where analysis of situations requires in-depth fault analysis and troubleshooting skills.
- Identify and reproduce customer technical problems in a test/lab environment.
- Work on day-to-day tickets, follow up with clients, provide feedback, and see problems through to resolution.
- Ensure proper case documentation and closure.
- Generate clear and concise documentation in the form of case notes, technical tips, and white papers.
- Contribute to the knowledge base by creating KB articles.
- Notify and discuss with Staff or Principal Engineers calls and emails that require assistance; ensure timely handoff (escalation) of cases that require advanced technical investigation by the LTE team.
- Suggest improvements to product quality and features, and be proactive in product development.
- Manage customer expectations and make sure customers receive the highest quality of service.
- Document customer issues for future reference and build a knowledge base of the solutions given to customers.
- Actively participate in trainings and improve product and process knowledge.
- Take on new-hire training and OJT responsibilities as part of the role.
- Understand the SLAs and align your way of working towards meeting them.
- Participate in forums and Tektalk exchanges.
- KRAs include: CSAT, Active Backlog, Aged Backlog, Time to Resolve and Time to Close, Escalation %, and KB Contribution.

Job Requirements
Education: B.Sc. or B.E. degree in Computer Science or a related field, or equivalent work experience.

Work Experience
- Minimum of 6-8 years of customer support experience in IP networks, WiFi, or a related environment.
- Wireless networking experience is mandatory.
- Working experience in a TAC as an escalation engineer is mandatory.

Certifications/Accreditations
- CWNA or CCNA is mandatory; CWNP (or equivalent) is an advantage.

Key Competencies
- Customer focus
- Drive for results
- Team player
- Coaching and feedback

Technical Skills & Knowledge
- Good understanding of TCP/IP, IGMP, switching and routing (Layer 2 & 3 communication), and internet protocols including DNS, DHCP, SMTP, VLAN, etc.
- CCNA/CWNA-level knowledge or higher is a must; CWSP would be a plus.
- Good exposure to and working experience with the 802.11a/b/g/i standards, plus knowledge of 802.11n.
- A good understanding of WLAN security in the areas of authentication, encryption, RADIUS, AAA, EAP, PSK, etc.
- Knowledge of deploying, configuring, supporting, troubleshooting, debugging, and administering the following wireless LAN products and technologies: wireless access points, wireless client associations, wireless controllers.
- A very good understanding of RF transmission and antenna behavior.
- Hands-on experience with protocol analyzer tools such as Wireshark and Ethereal.
- Ruckus Wireless products; Ethernet switching; routing and data centers; WiFi solutions, multi-tenant solutions, hotspot services.

General Knowledge in the Following Areas
- Wireless industry and competing products
- Competitor switching products
- TCP/IP, WAN/LAN
- IoT solutions

Other Abilities Required
- Good problem-solving and decision-making skills.
- Ability to understand and analyze customer issues, along with good troubleshooting skills.
- Ability to communicate clearly and effectively with clients and peers.
- Excellent written and verbal communication skills.
- Excellent interpersonal and teamwork skills.
- Self-driven, proactive, hardworking team player.
- Encourages and accepts feedback.
- Exposure to handling international customers.

Work Schedule
Monday through Friday, with weekend or overnight hours as required.

Travel
As required for NPI, PLM interactions, etc.

Learn more about how we're on a quest to connect the future and build what's next.
Posted 1 day ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Job Description
We are seeking a highly skilled and motivated Google Cloud Engineer to join our dynamic engineering team. In this role, you will be instrumental in designing, building, deploying, and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). You will work closely with development, operations, and security teams to ensure our cloud environment is scalable, secure, highly available, and cost-optimized. If you are passionate about cloud-native technologies, automation, and solving complex infrastructure challenges, we encourage you to apply.

What You Will Do
- Design, implement, and manage robust, scalable, and secure cloud infrastructure on GCP using Infrastructure as Code (IaC) tools like Terraform.
- Deploy, configure, and manage core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub, and networking components (VPC, Cloud Load Balancing, Cloud CDN).
- Develop and maintain CI/CD pipelines for automated deployment and release management using tools like Cloud Build, GitLab CI/CD, GitHub Actions, or Jenkins.
- Implement and enforce security best practices within the GCP environment, including IAM, network security, data encryption, and compliance adherence (see the sketch at the end of this listing).
- Monitor cloud infrastructure and application performance, identify bottlenecks, and implement solutions for optimization and reliability.
- Troubleshoot and resolve complex infrastructure and application issues in production and non-production environments.
- Collaborate with development teams to ensure applications are designed for cloud-native deployment, scalability, and resilience.
- Participate in on-call rotations for critical incident response and provide timely resolution to production issues.
- Create and maintain comprehensive documentation for cloud architecture, configurations, and operational procedures.
- Stay current with new GCP services, features, and industry best practices, proposing and implementing improvements as appropriate.
- Contribute to cost optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

What Experience You Need
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 6+ years of hands-on experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework, and SQL Server.
- 3+ years of experience with cloud platforms (GCP preferred), including designing and deploying cloud-native applications.
- 3+ years of experience with source code management (Git), CI/CD pipelines, and Infrastructure as Code.
- Strong experience with JavaScript and a modern JavaScript framework, VueJS preferred.
- Proven ability to lead and mentor development teams.
- Strong understanding of microservices architecture and serverless computing.
- Experience with relational databases (SQL Server, PostgreSQL).
- Excellent problem-solving, analytical, and communication skills.
- Experience working in Agile/Scrum environments.

What Could Set You Apart
- GCP Cloud Certification.
- UI development experience (e.g., HTML, JavaScript, Angular, Bootstrap).
- Experience in Agile environments (e.g., Scrum, XP).
- Relational database experience (e.g., SQL Server, PostgreSQL).
- Experience with Atlassian tooling (e.g., JIRA, Confluence) and GitHub.
- Working knowledge of Python.
- Excellent problem-solving and analytical skills, and the ability to work well in a team.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics, and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions, and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life's pivotal moments: applying for jobs or a mortgage, financing an education, or buying a car. Our impact is real, and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference, and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
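As a minimal, hedged sketch of the GCP security-hardening work described above (not Equifax code), the snippet below uses the google-cloud-storage client library to enforce uniform bucket-level access and a customer-managed KMS key on a bucket. The bucket, project, and key names are hypothetical placeholders, and it assumes application default credentials with storage admin rights.

```python
# Illustrative sketch only; names are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-app-data")

# Enforce uniform bucket-level access so only IAM (not object ACLs) applies.
bucket.iam_configuration.uniform_bucket_level_access_enabled = True

# Encrypt new objects with a customer-managed Cloud KMS key by default.
bucket.default_kms_key_name = (
    "projects/example-project/locations/us/keyRings/app/cryptoKeys/data-key"
)

# Persist both settings to the bucket.
bucket.patch()
print("Hardened:", bucket.name)
```

In practice these settings would live in Terraform rather than an ad hoc script, so the desired state is versioned and drift can be detected in CI.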
Posted 1 day ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Title: Cloud Security Consultant
Location: Mumbai
Experience: 5+ years
Availability: Immediate joiners preferred

Job Description
We are seeking an experienced Cloud Security Consultant to implement and maintain robust cloud security standards across leading platforms (AWS, Azure, GCP). The candidate must have a deep understanding of cloud provisioning, identity and access management, encryption standards, and network security.

Key Responsibilities
- Implement Secure Cloud Account & Environment Provisioning Standards (SCAEPS), including account/subscription setup protocols, root/owner account security controls, and baseline configurations and naming standards.
- Deploy and manage the Cloud IAM Technical Baseline (IAMTB), covering password policies, RBAC, and MFA enforcement; SSO/federation with enterprise identity systems; and secure management of service principals and cross-account access.
- Design and implement Network Security Configurations (NSCD): secure VPC/VNet design and subnet configurations; routing, firewall, and IDS/IPS configurations.
- Enforce Data Encryption Standards (DETS): AES-256 encryption and KMS key lifecycle management; TLS/SSL configuration and certificate management.
- Apply Cloud Storage Security Configurations (CSSCD): prevent public access to storage, and implement encryption and access policies for cloud storage (see the sketch below).

Requirements
- Minimum 5 years of experience in cloud security
- Hands-on experience with AWS/Azure/GCP security best practices
- Expertise in IAM, encryption, and network architecture
- Strong knowledge of regulatory standards (e.g., ISO, NIST, CIS)
- Relevant certifications preferred: AZ-500, AWS Security Specialty, CCSP, etc.
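To make the CSSCD-style storage controls concrete, here is a minimal sketch using boto3 on AWS; it is illustrative only, and the bucket name and KMS key alias are hypothetical. Equivalent controls exist on Azure and GCP.

```python
# Illustrative sketch only; bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
bucket = "example-sensitive-data"

# Block every form of public access at the bucket level.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Default-encrypt new objects with a customer-managed KMS key.
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": "alias/example-data-key",
                }
            }
        ]
    },
)
```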
Posted 1 day ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
We are seeking a highly experienced AWS Data Solution Architect to lead the design and implementation of scalable, secure, and high-performance data architectures on the AWS cloud. The ideal candidate will have a deep understanding of cloud-based data platforms, analytics, and best practices for optimizing data pipelines and storage. You will work closely with data engineers, business stakeholders, and cloud architects to deliver robust data solutions.

Key Responsibilities:
1. Architecture Design and Planning: Design scalable and resilient data architectures on AWS that include data lakes, data warehouses, and real-time processing. Architect end-to-end data solutions leveraging AWS services such as S3, Redshift, RDS, DynamoDB, Glue, and Lake Formation. Develop multi-layered security frameworks for data protection and governance.
2. Data Pipeline Development: Build and optimize ETL/ELT pipelines using AWS Glue, Data Pipeline, and Lambda. Integrate data from various sources such as RDBMS, NoSQL, APIs, and streaming platforms. Ensure high availability and real-time processing capabilities for mission-critical applications.
3. Data Warehousing and Analytics: Design and optimize data warehouses using Amazon Redshift or Snowflake. Implement data modeling, partitioning, and indexing for optimal performance. Create analytical models to drive business insights and data-driven decision-making.
4. Real-time Data Processing: Implement real-time data processing using AWS Kinesis, Kafka, or MSK. Architect solutions for event-driven architectures with Lambda and EventBridge (see the sketch at the end of this listing).
5. Security and Compliance: Implement best practices for data security, encryption, and access control using IAM, KMS, and Lake Formation. Ensure compliance with regulatory standards like GDPR, HIPAA, and CCPA.
6. Monitoring and Optimization: Monitor performance, optimize costs, and enhance the reliability of data pipelines and storage. Set up observability with AWS CloudWatch, X-Ray, and CloudTrail. Troubleshoot issues and ensure business continuity with automated recovery mechanisms.
7. Documentation and Best Practices: Create detailed architecture diagrams, data flow mappings, and documentation for reference. Establish best practices for data governance, architecture design, and deployment.
8. Collaboration and Leadership: Work closely with data engineers, application developers, and DevOps teams to ensure seamless integration. Act as a technical advisor to business stakeholders for cloud-based data solutions.

Regulatory Compliance Reporting Experience
The architect should be able to resolve complex challenges arising from the strict regulatory environment in India and the need to balance compliance with operational efficiency. Key complexities include:
a) Building data segregation and access control capability: this requires an in-depth understanding of data privacy laws and Amazon's global data architecture, and the ability to design systems that segregate and control access to sensitive payment data without compromising functionality.
b) Integrating diverse data sources into the Secure Redshift Cluster (SRC), which involves working with multiple teams and systems, each with its own data structure and transfer protocols.
c) Instrumenting additional UPI data elements, collaborating with UPI tech teams, with a deep understanding of UPI transaction flows to ensure accurate and compliant data capture.
d) Automating Law Enforcement Agency (LEA) and Financial Intelligence Unit (FIU) reporting: this involves creating secure, automated pipelines for highly sensitive data, ensuring accuracy and timeliness while meeting strict regulatory requirements.

The architect will extend from India-specific solutions to serving worldwide markets. Complexities include:
a) Designing a unified data storage and compute architecture, which requires harmonizing diverse tech stacks and data logging practices across multiple countries while considering data sovereignty laws and the cost implications of cross-border data transfers.
b) Setting up comprehensive data marts covering metrics and dimensions, which involves standardizing metric definitions across markets, ensuring data consistency, and designing for scalability to accommodate future growth.
c) Enabling customer segmentation across power-up programs, which requires integrating data from diverse programs while maintaining data integrity and respecting country-specific data usage regulations.
d) Managing time zone challenges: synchronizing data across multiple time zones requires innovative solutions to ensure timely data availability without compromising completeness or accuracy.
e) Navigating regulatory complexities: designing systems that comply with varying and evolving data regulations across multiple countries while maintaining operational efficiency and flexibility for future changes.
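As a hedged illustration of the event-driven pattern in item 4 above (Kinesis triggering Lambda), the sketch below shows a Lambda handler decoding Kinesis records and landing them in a date-partitioned S3 prefix for downstream reconciliation and reporting. It is not the client's actual code: the stream wiring, bucket, and payload fields (txn_id, txn_date) are hypothetical.

```python
# Illustrative sketch only; bucket and field names are hypothetical.
import base64
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Lambda handler invoked by a Kinesis event source mapping."""
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded inside the event envelope.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))

        # Route each transaction to a date-partitioned prefix so batch
        # reconciliation and compliance reports can prune by day.
        key = f"raw/upi/{payload['txn_date']}/{payload['txn_id']}.json"
        s3.put_object(
            Bucket="example-regulated-lake",
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
        )

    return {"processed": len(event["Records"])}
```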
Posted 1 day ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About SAS Partners & the Opportunity
SAS Partners is a trusted advisory and services partner for Indian and global companies, offering solutions across corporate compliance, regulatory frameworks, finance, and operations. As part of our digital transformation and ISO 27001 alignment, we are strengthening our internal technology infrastructure. We are looking for an IT Administrator to lead the management of our IT systems, endpoint controls and security, Zoho One applications, and user support. This is a critical role that bridges daily tech operations and strategic IT governance, enabling efficiency and resilience across the organization.

Key Responsibilities
1. IT Infrastructure & Security (Including Endpoint Controls)
- Manage the company's IT assets: hardware, software, networks, servers, and cloud tools.
- Deploy and monitor endpoint protection (e.g., Trend Micro or equivalent) for device control, patching, and malware protection.
- Enforce endpoint policies using tools like Zoho UEM or similar.
- Ensure secure backups, system uptime, and readiness for disaster recovery.
- Implement and monitor IT security controls aligned with ISO 27001:2022.
- Conduct periodic internal audits and security assessments.
- Familiarity with SASE and zero-trust architectures is a plus.
2. User Support & Training
- Provide hands-on support to users for hardware, software, network, and access issues.
- Conduct regular training sessions for employees on Zoho tools and IT practices.
- Maintain SOPs, user guides, and internal IT documentation.
3. Zoho One Administration
- Manage, configure, and optimize Zoho One apps, including CRM, Books, People, Desk, Projects, Contracts, and WorkDrive.
- Customize automation, workflows, dashboards, and reports.
- Integrate Zoho applications internally and with third-party tools.
- Support adoption and troubleshooting across departments.
4. IT Optimization & Software Tools
- Evaluate and implement tools to improve efficiency, security, and scalability.
- Track usage, manage licenses, and ensure compliance with software terms.
- Assist in process automation initiatives across business functions.
5. Vendor & Procurement Management
- Coordinate with vendors for procurement, AMC renewals, and tech support.
- Track service contracts and performance, and ensure SLA compliance.

Qualifications and Skills
- Strong understanding of IT security frameworks, including endpoint controls, encryption, and firewall management.
- Experience with Unified Endpoint Management (Zoho UEM or similar) and endpoint protection tools like Trend Micro.
- Familiarity with cloud technologies and SaaS-based tools.
- Hands-on experience with the Zoho One suite (CRM, Books, Desk, People, Projects, Contracts, etc.).
- Proven experience in IT administration.
- Strong problem-solving ability and effective troubleshooting skills.
- Excellent communication skills to support non-technical staff and conduct user training.
- Awareness of IT governance, GDPR, and data protection regulations is preferred.

Educational Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications (e.g., CompTIA Network+, Zoho Certified Administrator, ITIL) are desirable.

Preferred Experience
- 3-5 years of experience in IT administration roles.
- Prior experience in IT for small to mid-sized organizations.
- Exposure to ISO 27001 technical controls and secure IT practices.
Posted 1 day ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description We’re looking for a highly skilled UI Developer with a strong background in building native applications across Windows, macOS, iOS, and Android platforms. This role requires hands-on expertise in platform-specific development tools and languages, such as C/C++, WinAPI, Cocoa, Swift, Kotlin, and Android NDK, to deliver intuitive, high-performance user interfaces tailored to each ecosystem. The ideal candidate also brings a strong focus on security, with the ability to integrate features like hardware-backed encryption, secure boot, and multi-factor authentication into consumer-facing applications. You’ll play a critical role in creating seamless, secure user experiences across desktop and mobile devices. Requirements Bachelor’s degree in Computer Science, Engineering, or a related field. Proven experience developing native applications for Windows and at least two additional platforms (macOS, iOS, Android). Proficient in C/C++ and platform-native development frameworks such as WinAPI, Cocoa, Swift, Kotlin, and Android NDK. Proven experience designing and building commercial-grade endpoint software at scale, with a strong emphasis on reliability, performance, and maintainability across diverse hardware and operating system environments Strong knowledge of TPM, Secure Enclave, and Android Keystore, with experience integrating these technologies for secure storage and authentication. Hands-on experience with cryptographic libraries such as OpenSSL, CryptoAPI, and CommonCrypto. Familiarity with authentication protocols like OAuth2.0, OpenID Connect, FIDO, and optionally Kerberos, SAML, and LDAP. Deep understanding of Windows and macOS internals, including system architecture, low-level APIs, and built-in security features such as BitLocker, User Account Control (UAC), Windows Defender, and macOS System Integrity Protection (SIP), Gatekeeper, and FileVault. Experience using mobile security testing tools such as AppScan, Burp Suite, or Mobile Security Framework (MobSF). Strong attention to detail with a passion for writing secure, efficient, and maintainable code. Excellent communication skills and a collaborative mindset, with the ability to mentor and inspire peers. Responsibilities As a member of the software engineering division, you will assist in defining and developing software for tasks associated with the developing, debugging or designing of software applications or operating systems. Provide technical leadership to other software developers. Specify, design and implement modest changes to existing software architecture to meet changing needs. Key Responsibilities Design and develop secure, high-performance native user interfaces for Windows, macOS, iOS, and Android platforms. Leverage platform-specific technologies (e.g., WinAPI, Cocoa, Swift, Kotlin, Android NDK) to deliver responsive, intuitive UI experiences. Integrate hardware-backed security features including Trusted Platform Module (TPM), Apple Secure Enclave, and Android Keystore for secure boot, attestation, and encrypted storage. Implement cryptographic algorithms and secure communication protocols to protect data at rest and in transit. Build and support robust authentication mechanisms, including MFA, biometrics (Face ID, Touch ID, fingerprint), and token-based access. Collaborate with security architects and engineers to define and implement secure software architecture. Conduct code reviews, threat modelling, and security assessments to proactively identify and address vulnerabilities. 
Stay informed on emerging threats, CVEs, and platform security updates, ensuring applications are always a step ahead. Partner closely with product managers, UX designers, and backend engineers to deliver cohesive, high-quality apps on time. Mentor junior developers in secure coding practices, cryptography, and platform-specific development techniques. Qualifications Career Level - IC3 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
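As a rough illustration of the encrypted-storage theme in this posting (sketched in Python rather than the platform C/C++ APIs the role names), authenticated encryption such as AES-GCM is the kind of primitive a keystore-protected key gets used for. The helper names and AAD label are assumptions; in production the key would stay inside the TPM, Secure Enclave, or Android Keystore rather than in process memory.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_blob(key: bytes, plaintext: bytes, aad: bytes = b"app-v1") -> bytes:
    nonce = os.urandom(12)                      # 96-bit nonce, unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, aad)

def decrypt_blob(key: bytes, blob: bytes, aad: bytes = b"app-v1") -> bytes:
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, aad)  # raises InvalidTag if tampered

key = AESGCM.generate_key(bit_length=256)       # would come from a keystore in practice
token = encrypt_blob(key, b"session-secret")
assert decrypt_blob(key, token) == b"session-secret"
```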
Posted 1 day ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About Mytholog Innovations At Mytholog Innovations, we turn visionary ideas into robust digital realities. Leveraging deep expertise in backend development, microservices, cloud-native architectures, and DevOps, we partner with clients to design scalable systems, enhance existing infrastructures, and deliver impactful engineering solutions. Our rigorous talent screening and commitment to excellence empower companies to build high-performing tech teams that drive sustained innovation and business growth. Job Description We’re seeking a Senior Java Developer with hands-on experience in processing large volumes of Kafka/RabbitMQ events, integrating complex third-party systems, and building rule-based engines using Drools. The ideal candidate is a backend powerhouse with a deep understanding of distributed systems, security-first architectures, and scalable integration patterns. If you're passionate about high-performance systems, clean architecture, and secure, maintainable code — this opportunity will challenge and reward you. Location: Remote Employment Type: Full-Time (Contract) Experience: Minimum 5 years Probation: 15 days Note: We are only looking for individual contractors. No agencies please. This is a full-time contractor role. It does not include traditional employee benefits (insurance, PF, etc.). Standard TDS will be deducted from payments, and tax filing is the contractor’s responsibility. Key Responsibilities Develop and maintain high-throughput event-driven systems using RabbitMQ/Kafka. Design and implement Drools-based business rule engines for dynamic decision-making. Build secure, performant Java Spring Boot microservices with clear boundaries and responsibilities. Develop the integration of external systems/APIs with attention to reliability, fault tolerance, and retries. Implement and enforce strong security practices (authentication, authorization, encryption). Own and optimize event consumption patterns, consumer group management, dead-letter handling, and backpressure control. Requirements 5+ years of hands-on backend development experience in Java (Spring Boot). Proven ability to process high volumes of Kafka/RabbitMQ messages at scale (multi-million/day range). Deep knowledge of event-driven architecture, distributed systems, and asynchronous processing. Proficiency with Drools or similar rule engines for dynamic business logic. Strong background in secure API development, OAuth2, JWT, and data encryption techniques. Hands-on experience integrating third-party systems and APIs with resilience patterns. Familiarity with cloud-native deployment practices, Docker, and CI/CD workflows. Strong debugging, profiling, and performance-tuning capabilities. Excellent communication skills to interface with both technical and non-technical stakeholders. Flexibility to work aligned with client time zones. Bonus: Exposure to Resilience4j, WebFlux, or reactive programming. Performance Evaluation Plan Days 1–15: Probation & Onboarding Deliver a sample Kafka/RabbitMQ consumer with metrics, retries, and logging. Submit a technical assessment of the current integration or rule setup. Demonstrate ownership and proactive communication with the team. Days 16–30: Production Integration & Rule Logic Deliver a real-world event processor integrated with at least one external system. Implement business logic using Drools with full test coverage and documentation. Conduct a peer review or propose optimization to an existing event flow.
Days 31–45: Scale, Secure, and Own Release a critical, production-grade event consumer or integration module. Patch a key security vulnerability or performance bottleneck. Establish yourself as a reliable backend expert across ongoing initiatives. Benefits 🏡 Fully Remote — Work from your preferred location 🌍 Global Exposure — Collaborate with fast-moving startups worldwide 🤝 Supportive Culture — Transparent, collaborative, and growth-oriented team 🎓 Certification Support — Timely reimbursement programs to boost your credentials 🚀 Performance-Focused Growth — Advancement based on impact, not tenure
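The probation deliverable above, a sample consumer with metrics, retries, and logging, follows a well-known shape: manual offset commits, bounded retries, and a dead-letter topic. Below is a minimal sketch, in Python with kafka-python for brevity even though the role itself is Java/Spring Boot; the topic names and broker address are assumptions.

```python
import json
from kafka import KafkaConsumer, KafkaProducer

BROKERS = ["localhost:9092"]          # assumed broker address

consumer = KafkaConsumer(
    "payments.events",                 # assumed source topic
    bootstrap_servers=BROKERS,
    group_id="payments-processor",
    enable_auto_commit=False,          # commit only after we finish a message
    value_deserializer=lambda b: json.loads(b),
)
dlq = KafkaProducer(bootstrap_servers=BROKERS,
                    value_serializer=lambda v: json.dumps(v).encode())

MAX_RETRIES = 3

def process(event: dict) -> None:
    ...  # business logic / rule-engine call would go here

for msg in consumer:
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            process(msg.value)
            break
        except Exception:
            if attempt == MAX_RETRIES:                 # retries exhausted:
                dlq.send("payments.events.dlq", value=msg.value)
    consumer.commit()                  # at-least-once: commit after handling
```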
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Title: AWS Cloud Engineer Experience Required: 3–5 Years Location: On-Site (Gurugram) Job Type: Full-Time Job Summary: We are looking for a skilled and proactive AWS Cloud Engineer with 3–5 years of hands-on experience in cloud computing to join our IT team. The successful candidate will be responsible for designing, implementing, and maintaining cloud infrastructure on Amazon Web Services (AWS) to support our scalable and secure digital platforms. Key Responsibilities: Design, deploy, and manage cloud infrastructure using AWS services such as EC2, S3, RDS, Lambda, CloudFormation, VPC, and more. Implement and maintain Infrastructure as Code (IaC) using tools like Terraform, AWS CloudFormation, or AWS CDK. Monitor and improve cloud system performance, reliability, and cost efficiency using tools such as AWS CloudWatch, CloudTrail, and Trusted Advisor. Set up and manage CI/CD pipelines using Jenkins, AWS CodePipeline, or GitLab CI/CD. Ensure cloud infrastructure is secure and compliant with security standards through proper configuration of IAM roles, policies, security groups, and encryption. Collaborate with development, DevOps, and security teams to optimize cloud architecture and support application deployments. Automate operational tasks to improve system performance and reduce manual interventions. Troubleshoot cloud-based issues and provide 2nd/3rd level support for AWS environments. Keep up to date with AWS service updates and recommend new solutions where appropriate. Required Skills: Strong experience with AWS core services (EC2, S3, RDS, IAM, VPC, etc.). Proficiency in scripting and automation (Python, Bash, or PowerShell). Experience with Docker and container orchestration (ECS, EKS, or Kubernetes). Good understanding of networking concepts like VPNs, subnets, routing, and firewalls in AWS. Familiarity with version control tools (Git, GitHub, Bitbucket). Strong problem-solving and troubleshooting abilities. Experience working in Agile/Scrum development environments. Preferred Qualifications: Bachelor’s degree in Computer Science, Engineering, or related field. AWS Certification (e.g., Solutions Architect Associate, SysOps Administrator, or DevOps Engineer). Experience with monitoring tools (Datadog, Prometheus, ELK Stack, or similar). Exposure to hybrid or multi-cloud environments. Knowledge of serverless computing (AWS Lambda, API Gateway, DynamoDB). Soft Skills: Excellent communication and collaboration skills. Strong analytical thinking and attention to detail. Ability to manage tasks independently and drive initiatives forward. A passion for learning and staying current with cloud technologies.
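As one hedged example of the monitoring responsibility above, the boto3 snippet below creates a CloudWatch alarm on a single instance's CPU. The region, instance ID, threshold, and SNS topic ARN are placeholders, not values from this posting.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                        # five-minute datapoints
    EvaluationPeriods=2,               # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```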
Posted 1 day ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
What drives us? Imagine this: a single tap on your phone unlocks a world where car ownership is seamless and stress-free. From finding the perfect car to maintaining it and eventually selling it, CARS24 is redefining every step of the car ownership journey. Our mission is simple—let our customers enjoy the thrill of the open road while we take care of everything else. With cutting-edge technology, data science, and customer insights, we’re building the ultimate Super App for car ownership. Already one of the world’s largest auto-tech companies, we’re only just beginning. What will you drive? Collaborate Across Teams: Work closely with all CARS24 teams to help them design and build secure systems and applications. Secure Architecture: Guide teams in creating secure software and infrastructure by recommending safe data storage, secure APIs, encryption, network protections, and robust system designs. Authentication & Access: Help teams implement strong authentication and authorization—like multi-factor authentication, secure password policies, and safe API access controls. Security Reviews & Testing: Conduct code reviews, architecture reviews, penetration testing, risk assessments, vulnerability scans, and threat modeling. Infrastructure Security: Assess and secure production, corporate, and cloud environments (AWS, GCP, Azure). Security Tooling & Automation: Build and maintain tools, scripts, and systems that automate security checks and help prevent security issues at scale. Monitoring & Detection: Set up and improve systems that detect and alert on possible attacks, abnormal activity, or data leaks, helping teams respond quickly. Policy & Governance: Develop and manage central security policies and guidelines for cloud and on-prem infrastructure. Promote DevSecOps Culture: Advocate and enable security to be part of the development process from the start, so secure code and systems are everyone’s responsibility. Security Community & Brand: Represent CARS24 in the security community (such as through bug bounty programs, security blogs, etc.), and share best practices internally and externally. Who are we looking for? 2–5 years of hands-on experience in application/infrastructure security, DevSecOps, or related roles (SDE-I / SDE-II). Strong knowledge of AWS and/or GCP security concepts and cloud environments. Experience with secure code reviews, vulnerability assessments, and penetration testing. Proficiency in at least one scripting or programming language (Python, Go, Bash, etc.). Familiarity with security automation, monitoring tools, and best practices for incident detection and response. Understanding of modern authentication, authorization, and encryption mechanisms. Excellent problem-solving and analytical skills. Strong communication and collaboration skills. Passion for building secure systems, and a proactive, ownership-driven mindset.
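To ground the Security Tooling & Automation bullet, here is a small read-only sketch of the kind of scheduled check such a team might run: flag S3 buckets whose public-access block is missing or incomplete. It assumes only boto3 and default AWS credentials; nothing here is CARS24-specific.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):      # any of the four flags left False
            print(f"[warn] {name}: public access only partially blocked")
    except ClientError:
        # No public-access-block configuration exists on this bucket at all.
        print(f"[warn] {name}: no public access block configured")
```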
Posted 1 day ago
3.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Work experience: 3-6 years Budget is 7 Lac Max Notice period: Immediate to 30 days. Linux · Install, configure, and maintain Linux servers (Red Hat, CentOS, Ubuntu, Amazon Linux). · Linux OS through Network and Kickstart installation · Manage system updates, patch management, kernel upgrades. · Create and manage user accounts, file systems, permissions, and storage. · Write shell scripts (Bash, Python) for task automation. · Monitor server performance and troubleshoot hardware/software issues. · Handle incident management, root cause analysis, and preventive maintenance. · Implement and manage backup solutions (rsync, cron jobs, snapshot backups); see the sketch after this posting. · Harden servers by configuring firewalls (iptables, firewalld), securing SSH, and managing SELinux. · Configure and troubleshoot networking services (DNS, DHCP, FTP, HTTP, NFS, Samba). · Work on virtualization and cloud technologies (AWS EC2, VPC, S3, RDS basics if required). · Maintain detailed documentation of system configuration and procedures. · Implement and configure Apache & Tomcat web servers with OpenSSL on Linux. · SWAP Space Management. · LVM (extending, reducing, removing and merging), Backup and Restoration. Amazon Web Services · AWS Infrastructure Management: Provision and manage cloud resources like EC2, S3, RDS, VPC, IAM, EKS, Lambda. · Cloud Architecture: Design and implement secure, scalable, and reliable cloud solutions. · Automation and IaC: Automate deployments using tools like Terraform, CloudFormation, or AWS CDK. · Security Management: Configure IAM roles, security groups, encryption (KMS), and enforce best security practices. · Monitoring and Optimization: Monitor cloud resources with CloudWatch, X-Ray, and optimize for cost and performance. · Backup and Disaster Recovery: Set up data backups (S3, Glacier, EBS snapshots) and design DR strategies. · CI/CD Implementation: Build and maintain CI/CD pipelines using AWS services (CodePipeline, CodeBuild) or Jenkins, GitLab, GitHub. · Networking: Manage VPCs, Subnets, Internet Gateways, NAT, VPNs, Route53 DNS configurations. · Troubleshooting and Support: Identify and fix cloud resource issues, perform root cause analysis. · Migration Projects: Migrate on-premises servers, databases, and applications to AWS. Windows Server and Azure: · Active Directory: Implementation, Migration, Managing and troubleshooting. · Deep knowledge on DHCP Server · Deep knowledge in Patch management · Troubleshooting Windows operating System · Decent knowledge in Azure (Creation of VMs, configuring network rules, Migration, Managing and troubleshooting) · Deep knowledge in VMware ESXi (Upgrading the server firmware, creation of VMs, Managing backups, monitoring etc) Networking: · Knowledge on IP Addressing, NAT, P2P protocols, SSL and IPsec VPNs, etc. · Deep knowledge in VPN · Knowledge in MVoIP, VMs, SIP PRI and Leased Line. · Monitoring the Network bandwidth and maintaining the stability · Configuring Switches and Routers · Troubleshooting Network Devices · Must be able to work on Cisco Meraki Access Point devices Firewall & Endpoint Security: · Decent knowledge in Fortinet Firewalls which includes creating Objects, Routing, creating Rules and monitoring etc.
· Decent knowledge in CrowdStrike · Knowledge in vulnerability assessment Office365: · Deep knowledge in Office365 (Creation of mail, Backup and archive, Security rules, Security Filters, Creation of Distribution lists, etc.) · Knowledge in MX, TXT and other DNS records · Deep knowledge in Office365 apps like Teams, Outlook, Excel, etc. · SharePoint management Other Tasks: · Hardware servicing of laptops and desktops · Maintaining Asset inventory up to date. · Managing the utility invoices. · Handling L1 and L2 troubleshooting · Vendor Management · Handling application related issues · Website hosting and monitoring · Tracking all Software licenses and Cloud Service renewal periods and ensuring they are renewed on time. · Monitoring, managing and troubleshooting servers. · Knowledge in NAS · Knowledge in Endpoint Central tool and Ticketing tool.
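One bullet above asks for shell scripts (Bash, Python) around rsync and cron; a minimal Python sketch of such a nightly snapshot job follows. The source path, destination host, log location, and schedule are invented placeholders.

```python
#!/usr/bin/env python3
# Nightly rsync snapshot; schedule from cron, e.g.:
#   30 2 * * * /usr/local/bin/nightly_backup.py
import subprocess, datetime, sys

SRC = "/var/www/"                                # assumed source directory
DEST = "backup@10.0.0.5:/backups/www"            # assumed backup host/path
LOG = "/var/log/nightly_backup.log"

stamp = datetime.datetime.now().isoformat(timespec="seconds")
result = subprocess.run(
    ["rsync", "-az", "--delete", SRC, DEST],     # archive, compress, mirror deletes
    capture_output=True, text=True,
)
with open(LOG, "a") as log:
    log.write(f"{stamp} rc={result.returncode} {result.stderr.strip()}\n")
sys.exit(result.returncode)                      # non-zero exit lets cron mail the failure
```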
Posted 1 day ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Us: Fortanix is a dynamic start-up solving some of the world's most demanding data protection challenges for companies and governments around the world. Our disruptive technology maintains data privacy across its entire lifecycle -- at rest, in motion, and in use across any enterprise IT infrastructure -- public cloud, on-premise, hybrid cloud, and SaaS. With key strategic partners like Microsoft, Intel, ServiceNow, and Snowflake, Fortanix customers like PayPal, Google & Adidas are reaping the benefits. Recognized by Gartner as a "Cool Vendor", Fortanix is revolutionizing cyber security. Join the revolution! Why work with us? We're seeking passionate people to work with us to change the very idea of how people use cloud computing. We take pride in making Fortanix a great place to work. Coworkers recognize that great ideas can come from anyone, and everyone is encouraged to jump in, contribute, and ask questions. In tackling the hardest problems, we believe that working together will produce better solutions. We are looking for a Software Engineer Intern who wants to learn about hybrid Cloud solutions, to join our R&D org. What you'll do (Duties and Responsibilities): Contribute to all aspects of the deployment and automation process, including: research, implementation, testing and documentation. Participate in monitoring and maintenance of current infrastructure Improve Continuous Integration/Continuous Delivery tools, processes and procedures Contribute to a friendly and helpful company culture Design, implement, test, and maintain one or more of: The backend of our cloud security platform, written in Rust, C Our Runtime Encryption® software for SGX, written in Rust and C Help deploy, monitor, and tune the performance of our software Follow security best practices (don't worry, we'll tell you what they are) Other comparable work from time to time as instructed Requirements What you'll need (Basic Qualifications) Understanding of software security principles, secure software development, and best practices for security Must have knowledge in Rust, C/C++, Java, JavaScript, Python, or Go Experience in software deployment tools such as Kubernetes or Docker Enrolled in a Bachelor's degree in Computer Science or a similar field Strong debug skills, effective verbal and written communication skills, team oriented Experience developing cloud software services and an understanding of design for scalability, performance and reliability. Excellence in technical communication with peers and non-technical people. Excellent communication skills and high English proficiency Preferred Technical And Professional Expertise Knowledge in basic Cloud Architecture Experience in Rust and/or C++ Benefits Fortanix is an equal opportunity employer that celebrates diversity and is committed to creating an inclusive workplace with equal opportunity for all applicants and teammates. Our goal is to recruit the most talented people from a diverse candidate pool regardless of race, color, religion, age, gender, gender identity, sexual orientation, or any other status. If you're interested in working in a fast-growing, exciting environment - we encourage you to apply!
Posted 1 day ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Linux Administrator Experience: 6+ Years Location: Chennai Mandatory: Linux, Unix, GCP, AWS JD: Experience: o 8+ years of experience in cloud security, with a focus on enterprise product software in the cloud. o At least 3+ years of hands-on experience with major cloud platforms (AWS, Microsoft Azure, or Google Cloud Platform). o Proven experience with securing enterprise software applications and cloud infrastructures. o Strong background in securing complex, large-scale software environments with a focus on infrastructure security, data security, and application security. o Hands-on experience with the OWASP Top 10 and integrating security measures into cloud applications. o Experience with Hybrid Cloud environments and securing workloads that span on-premises and public cloud platforms. Technical Skills: o In-depth experience with cloud service models (IaaS, PaaS, SaaS) and cloud security tools (e.g., AWS Security Hub, Azure Security Center, GCP Security Command Center). o Expertise in securing enterprise applications, including web services, APIs, and microservices deployed in the cloud. o Strong experience with network security, encryption techniques, IAM policies, security automation, and vulnerability management in cloud environments. o Familiarity with container security (Docker, Kubernetes) and serverless computing security. o Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or similar tools. o Knowledge of regulatory compliance requirements such as SOC 2, GDPR, HIPAA, and how they apply to enterprise software hosted in the cloud.
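A small, hedged illustration of the IAM-policy and vulnerability-management items above: a read-only boto3 audit that lists console users with no MFA device attached. It assumes default AWS credentials and skips pagination for brevity.

```python
import boto3

iam = boto3.client("iam")

# Flag any IAM user without an MFA device (read-only; changes nothing).
for user in iam.list_users()["Users"]:
    name = user["UserName"]
    if not iam.list_mfa_devices(UserName=name)["MFADevices"]:
        print(f"[warn] IAM user without MFA: {name}")
```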
Posted 1 day ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Yularatech is an IT Consulting & Services Company offering high quality end-to-end IT solutions to partner clients. We specialize in IT consulting, IT skills resourcing, and outsourced end-to-end application development and support. Position Summary We are seeking a highly experienced Senior QA to independently handle manual testing for critical payment systems. This role requires deep expertise in payment processing, transaction validation, compliance testing, and defect management. The candidate will be fully responsible for ensuring the accuracy, security, and reliability of payment transactions, from initiation to settlement. This is a hands-on manual testing role: automation skills are not mandatory, though they are an added advantage; strong domain knowledge and testing expertise are essential. The selected individual will lead the end-to-end testing efforts, ensure regulatory compliance, and work closely with cross-functional teams to uphold the highest quality standards. If you are passionate about ensuring the highest standards of quality in every product you test, and have an innovative mindset that will contribute to continuous improvement in QA practices, PLEASE APPLY. Key Skills & Qualifications required · Strong expertise in financial/payment systems: Deep understanding of payment processing, authorization flows, settlements, chargebacks, and financial reconciliation. · Standards Expertise (AS2805 & ISO8583): Must have hands-on testing experience with AS2805 and ISO8583 payment message formats. · Scheme Compliance (Visa, Mastercard, Others): Proven experience in executing and validating scheme compliance tests for major card networks. · Simulator Proficiency (FINsim, OTS, Paysim): Practical knowledge of payment simulators for test execution and transaction validation. · End-to-End Authorization Flow Understanding: Strong grasp of the full payment transaction lifecycle from POS to host, including switch behavior. · Manual Testing Mastery: Ability to design, execute, and document detailed test cases for complex financial transactions. · Defect Lifecycle Management: Proficiency in logging, tracking, and prioritizing issues using JIRA, Bugzilla, HP ALM. · API Testing: Hands-on experience testing payment APIs manually using Postman, SoapUI, or equivalent tools. · SQL & Database Testing: Strong ability to validate financial data using SQL queries in databases like Oracle, PostgreSQL, MySQL. · Compliance & Regulatory Knowledge: Solid understanding of PCI DSS, EMV, PSD2, ISO 20022, ISO 8583 & SWIFT protocols. · End-to-End Testing Expertise: Ability to validate cross-border transactions, multi-currency processing, tax handling, and compliance reporting. · Agile delivery Toolsets: Expertise in using JIRA, Confluence, TestRail, Bugzilla, etc. for bug reporting and management · Test management tools (e.g., TestRail, QTest) to organize and manage test cases · Mobile testing tools and frameworks (e.g., Appium, Espresso) for iOS and Android applications. Soft Skills ● Highly impactful communication - oral, written, and presentation - to effectively communicate with stakeholders and to present and document test plans, test reports, test cases, and defects. ● High precision in validating financial transactions, calculations, and reports. ● Ability to detect hidden issues in financial workflows and suggest improvements. ● Candidate must have an independent work ethic, strong communication skills for effective team mentoring and handling.
● Excellent proficiency in the English language, as the role requires mentoring junior team members. ● Ability to work autonomously and produce high-quality outputs with minimal daily interventions. ● Demonstrates leadership qualities and mentors the team as needed. ● Professionalism, Positivity, Integrity, and ability to handle confidential information. ● Ability to take ownership of enhancing the QA processes and drive continuous improvement initiatives. Primary Responsibilities 1. Ownership & Leadership of QA for Payment Systems: · Independently manage the entire QA process for the payment system. · Define test strategies, plans and execution roadmaps for payment transactions. · Ensure end-to-end test coverage including payment flows, settlements, refunds, chargebacks, and reconciliations. · Work closely with business analysts, developers, and product owners to validate requirements and assess risk. · Provide detailed test reports, defect analysis, and risk assessments to stakeholders. 2. Comprehensive Functional & System Testing: · Perform manual functional testing for: o Transaction Processing: Authorization, Clearing, Settlement, Refunds & Reversals. o Payment Gateway Integrations: Visa, Mastercard, AMEX, PayPal, UPI, SWIFT, ACH, SEPA etc. o Reconciliation & Ledger Validation: Ensure transactional consistency across systems. o Card & Bank Payments: Debit/Credit Cards, Net Banking & Digital Wallets. o Cross-Border Payments: Validate multi-currency payments and exchange rate calculations. · Verify invoice generation, fee calculation, and taxation rules. 3. Compliance & Security Testing: · Ensure adherence to PCI DSS, ISO 8583, ISO 20022, PSD2, EMV, SWIFT regulatory standards. · Validate data encryption, tokenization, and masking of sensitive payment data. · Conduct manual security validation to identify data leaks, incorrect permissions, and risk vulnerabilities in payment transactions. · Test fraud detection mechanisms and ensure that alerts are triggered for suspicious activities. 4. API & Backend Testing: · Manually validate REST/SOAP API responses using Postman, SoapUI, or equivalent tools. · Test payment APIs for accuracy, response codes, and error handling. · Validate database transactions using SQL queries to confirm data integrity, reconciliation, and reporting. 5. Performance & Failure Scenario Testing (Manually Identifying System Weaknesses): · Conduct boundary value testing for large transaction amounts. · Simulate failed transactions, declined payments, and chargeback cases. · Identify edge cases in payment failure handling (e.g., expired cards, incorrect CVV, insufficient balance). 6. Defect Management & Risk Reporting: · Log defects in JIRA, HP ALM, Bugzilla, or equivalent tools with clear reproduction steps. · Categorize defects based on severity and prioritize critical payment bugs. · Perform root cause analysis for recurring payment issues and suggest preventive measures. · Collaborate with developers, support teams, and business analysts to ensure quick issue resolution. 7. UAT Support & Production Validation: · Assist business teams in conducting User Acceptance Testing (UAT). · Validate production deployments to ensure smooth go-live without financial discrepancies. · Provide post-production monitoring support to confirm system stability. Mandatory / MOST IMPORTANT Pre-requisites · A total of 10 years and a minimum of 3-5 years of professional experience working specifically in payment systems or financial application testing.
· Ability to balance quality, security, usability, and efficiency in payment system testing. · Bachelor’s / Master’s degree in Computer Science, Information Technology or a related field is preferred. · Strong knowledge of payment system architectures and components, including payment gateways, processors, acquirers, issuers, card networks, and digital wallets. · Experience in working for international clients and projects with cross-cultural audiences. · Understanding of the end-to-end payment processing lifecycle, including authorization, authentication, clearing, and settlement. · Decent exposure to performance testing tools (e.g., JMeter, LoadRunner) to evaluate system performance under load. · Understanding of security testing practices and tools (e.g., OWASP ZAP, Burp Suite) to identify vulnerabilities. · Experience working in Agile or Scrum environments for iterative development and delivery. · Ability to manage multiple projects, coordinate tasks, and ensure timely delivery of test reports. Desirable ● Experience with performance and load testing tools (e.g., JMeter). ● Proficiency with CI/CD tools (e.g., Jenkins, Travis CI, CircleCI) ● Exposure to cloud-based payment solutions (AWS, Azure, or Google Cloud). ● Familiarity with risk management and fraud detection techniques in payment systems. ● Certifications such as Certified Software Tester (CSTE), Certified Payment Professional (CPP), or Certified Payment Card Industry Security Auditor (CPISA). ● Familiarity with BDD tools and frameworks (e.g., Cucumber, SpecFlow) Additional Information ● We offer a competitive salary package and a comprehensive benefits package. ● You will have the opportunity to work on exciting and impactful projects. ● You will be joining a collaborative and inclusive work environment. ● Enjoy continuous learning and professional development opportunities. ● The candidate is expected to work with the client’s team based in Australia (on split time zones, i.e., IST or AEST on a need basis).
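The SQL and database testing duties described above often reduce to reconciliation queries. Below is a hedged sketch comparing authorization totals against settlement totals per merchant; the table and column names are invented for illustration, and real acquirer schemas will differ.

```python
import sqlite3

# Example database; in practice this would be Oracle, PostgreSQL, or MySQL.
conn = sqlite3.connect("payments.db")

query = """
SELECT a.merchant_id,
       SUM(a.amount)              AS authorized,
       COALESCE(SUM(s.amount), 0) AS settled
FROM   authorizations a
LEFT JOIN settlements s ON s.auth_id = a.auth_id
GROUP  BY a.merchant_id
HAVING SUM(a.amount) <> COALESCE(SUM(s.amount), 0)
"""

# Any row returned is a merchant whose settled total drifted from authorizations.
for merchant, authorized, settled in conn.execute(query):
    print(f"mismatch: merchant={merchant} authorized={authorized} settled={settled}")
```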
Posted 1 day ago
0 years
0 Lacs
Kochi, Kerala, India
Remote
Job Summary Working as part of a dynamic and fast-growing engineering team, the DevOps Engineer will play an important part in our modernisation programme, as the balance of our hosting environments has shifted from managed hosting to AWS cloud over the last couple of years, as well as participating in .NET framework upgrades and the shift towards cloud-native solutions. Responsibilities The DevOps Engineer will be a key resource managing the AWS Cloud Infrastructure in the team but will be equally happy to roll up their sleeves getting involved in the full range of DevOps activities, such as the deployment of Meritsoft's heterogeneous software and ensuring the environments are secure, automated, and stable. The DevOps Lead Engineer works with all Meritsoft products in all portfolios and ensures a consistent method of automated software builds, packaging, and deployment processes. Management of AWS Cloud infrastructure including Windows and Linux servers. Network configuration such as managing subnets, firewalls, and load-balancing within our private and public cloud environments Managing data encryption standards and ensuring we follow best practice security techniques Manual and Automated Deployment of Meritsoft software and third-party software installations both to Cloud and on-premises environments Implement automation tools and frameworks to improve the processes (CI/CD pipelines) Working with container orchestration engines such as Kubernetes. Administration and configuration changes to physical and virtual servers User account rights management on Source Code repositories (GitHub) User account and rights management on automated build and deploy systems (CircleCI) Monitoring security of virtual network boundaries and applying countermeasures in the form of virus protection and firewall rules where applicable Troubleshoot internal production issues and perform first/second line support as required (CircleCI/SQL Server/Oracle) Set-up and configuration of Microsoft Remote Desktop Services for access to our environments Co-ordinate with the development team to streamline code deployment (CircleCI) Further duties as assigned by the DevSecOps Manager Managing and co-ordinating team members and juniors in DevOps practices and implementation.
Posted 1 day ago
5.0 years
0 Lacs
Panaji, Goa
On-site
Education: Bachelor’s or master’s in computer science, Software Engineering, or a related field (or equivalent practical experience). About the Role We’re creating an internal platform that turns data-heavy engineering workflows—currently spread across spreadsheets, PDFs, e-mail, and third-party portals—into streamlined, AI-assisted services. You’ll own large pieces of that build: bringing data in, automating analysis with domain-specific engines, integrating everyday business tools, and partnering with a data analyst to fine-tune custom language models. The work is hands-on and highly autonomous; you’ll design, code, deploy, and iterate features that remove manual effort for our engineering and project-management teams. What You’ll Do AI & LLM Workflows – prototype and deploy large-language-model services for document parsing, validation, and natural-language Q&A. Automation Services – build Python micro-services that convert unstructured project files into structured stores and trigger downstream calculation tools through their APIs. Enterprise Integrations – connect calendars, project-tracking portals, and document libraries via REST / Graph APIs and event streams. DevOps & Cloud – containerize workloads, write CI/CD pipelines, codify infrastructure (Terraform/CloudFormation) and keep runtime costs in check. Quality & Security – maintain tests, logging, RBAC, encryption, and safe-prompt patterns. Collaboration – document designs clearly, demo working proofs to stakeholders, and coach colleagues on AI-assisted development practices. You’ll Need 5+ years professional software-engineering experience, including 3+ years Python. Proven track record shipping AI / NLP / LLM solutions (OpenAI, Azure OpenAI, Hugging Face, or similar). Practical DevOps skills: Docker, Git, CI/CD pipelines, and at least one major cloud platform. Experience integrating external SDKs or vendor APIs (engineering, GIS, or document-management domains preferred). Strong written / verbal communication and the discipline to work independently from loosely defined requirements. Nice-to-Have Exposure to engineering or construction data (drawings, 3-D models, load calculations, etc.). Modern front-end skills (React / TypeScript) for dashboard or viewer components. Familiarity with Power Automate, Graph API, or comparable workflow tools. How We Work Autonomy + Ownership – plan your own sprints, defend technical trade-offs, own deliverables end-to-end. AI-Augmented Development – we encourage daily use of coding copilots and chat-based problem solving for speed and clarity. If you enjoy blending practical software engineering with cutting-edge AI tooling to eliminate repetitive work, we’d like to meet you. Job Types: Full-time, Permanent Pay: ₹80,000.00 - ₹90,000.00 per month Benefits: Health insurance Provident Fund Schedule: Day shift Monday to Friday Supplemental Pay: Yearly bonus Work Location: In person Application Deadline: 30/06/2025 Expected Start Date: 30/06/2025
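For the AI & LLM Workflows line above, a minimal document-parsing sketch using the OpenAI Python client is shown below. The model name, JSON keys, and sample document are assumptions for illustration, not details of the platform itself.

```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_metadata(document_text: str) -> dict:
    """Ask the model to pull structured fields out of free-form project text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",                        # assumed model choice
        messages=[
            {"role": "system",
             "content": "Extract project metadata as JSON with keys "
                        "'project', 'discipline', and 'revision'."},
            {"role": "user", "content": document_text},
        ],
        response_format={"type": "json_object"},    # force valid JSON output
    )
    return json.loads(response.choices[0].message.content)

print(extract_metadata("Project Falcon - Structural - Rev C"))
```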
Posted 1 day ago
0.0 years
0 Lacs
Hyderabad, Telangana
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India. Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery. Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments. Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering. Preferred qualifications: Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar. Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations. Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management. Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors. Experience in building prototypes or applications. Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems. About the job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law.
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 day ago
0.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
Our software engineers at Fiserv bring an open and creative mindset to a global team developing mobile applications, user interfaces and much more to deliver industry-leading financial services technologies to our clients. Our talented technology team members solve challenging problems quickly and with quality. We're seeking individuals who can create frameworks, leverage developer tools, and mentor and guide other members of the team. Collaboration is key and whether you are an expert in a legacy software system or are fluent in a variety of coding languages you're sure to find an opportunity as a software engineer that will challenge you to perform exceptionally and deliver excellence for our clients. Full-time Entry, Mid, Senior Yes (occasional), Minimal (if any) Responsibilities Requisition ID R-10359016 Date posted 06/16/2025 End Date 06/30/2025 City Pune State/Region Maharashtra Country India Location Type Onsite Calling all innovators – find your future at Fiserv. We’re Fiserv, a global leader in Fintech and payments, and we move money and information in a way that moves the world. We connect financial institutions, corporations, merchants, and consumers to one another millions of times a day – quickly, reliably, and securely. Any time you swipe your credit card, pay through a mobile app, or withdraw money from the bank, we’re involved. If you want to make an impact on a global scale, come make a difference at Fiserv. Job Title SAP PO, Specialist What does a great SAP PO, Specialist, Software Development Engineering do? Collaborate with the Functional Team in OTC, PTP, RTR and the Technical team of the Financial Systems Team, and help the organization solve complex technical architectural problems and build sophisticated business solutions in an SAP S4 HANA environment. What you will do: Design the solution by gathering information from the client and prepare detailed requirement specifications (functional and non-functional), use cases, and business process recommendations. Design, Build, Test and Deploy solutions in SAP PO Monitor system interfaces, identify issues, and provide quick remediation for integration issues. Troubleshoot and resolve critical issues of bank, billing, procurement systems on an urgent basis. Work on B2B, B2P solutions involving SAP and non-SAP systems. Handle File to File, File to IDOC, Proxy and Bank interfaces in a secure manner. Design, build, maintain Integration Repository, Integration Directory and maintain other configurations in PO. Design, Build and implement solutions in PO connecting the SAP S4 HANA system and allied systems like BW, BPC, SAC. What you will need to have: SAP PI/PO experience of 8-10 years Implementation experience of 2 projects. SAP PO Experience is a must with 1 implementation and/or support exposure. Experience in migrating PI Dual stack to PO 7.5 single stack. Experience in certificate-based authentication and PGP encryption Experience in managing onshore and offshore projects. Experience in Proxy, SOAP, REST, HTTP, SFTP interfaces. Experience in end-to-end testing of interfaces with Banks and other Financial integrations Experience in Ariba integration. Extensive experience in the Monitoring and Troubleshooting capabilities of SAP PO such as Message Monitoring, Component Monitoring, Performance Monitoring, End-to-End Monitoring in Runtime Workbench Hands-on experience in Graphical, XSLT and JAVA mapping technologies. What would be great to have: CPI Work Experience / Knowledge ABAP Knowledge Thank you for considering employment with Fiserv.
Please: Apply using your legal name Complete the step-by-step profile and attach your resume (either is acceptable, both are preferable). Our commitment to Diversity and Inclusion: Fiserv is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, gender, gender identity, sexual orientation, age, disability, protected veteran status, or any other category protected by law. Note to agencies: Fiserv does not accept resume submissions from agencies outside of existing agreements. Please do not send resumes to Fiserv associates. Fiserv is not responsible for any fees associated with unsolicited resume submissions. Warning about fake job posts: Please be aware of fraudulent job postings that are not affiliated with Fiserv. Fraudulent job postings may be used by cyber criminals to target your personally identifiable information and/or to steal money or financial information. Any communications from a Fiserv representative will come from a legitimate Fiserv email address.
Posted 1 day ago