
88,748 SQL Jobs - Page 50

JobPe aggregates listings for convenient access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Are you a passionate Spark and Scala developer looking for an exciting opportunity to work on cutting-edge big data projects? Delhivery is seeking a talented and motivated Spark & Scala expert to join our dynamic team.

Responsibilities:
- Develop and optimize Spark applications to process large-scale data efficiently
- Collaborate with cross-functional teams to design and implement data-driven solutions
- Troubleshoot and resolve performance issues in Spark jobs
- Stay up to date with the latest trends and advancements in Spark and Scala technologies

Requirements:
- Proficiency with Redshift, data pipelines, Kafka, real-time streaming, connectors, etc.
- 3+ years of professional experience with big data systems, pipelines, and data processing
- Strong experience with Apache Spark, Spark Streaming, and Spark SQL
- Solid understanding of distributed systems, databases, system design, and big data processing frameworks
- Familiarity with Hadoop ecosystem components (HDFS, Hive, HBase) is a plus
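For illustration only: the role above centres on Spark batch jobs and Spark SQL. The sketch below shows the shape of such a job in PySpark (the posting asks for Scala, but the DataFrame API is analogous); the bucket paths and column names are hypothetical, not taken from the listing.

```python
# Minimal PySpark sketch of a batch aggregation job of the kind described above.
# All paths and column names (created_at, city, amount) are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-order-metrics").getOrCreate()

events = spark.read.parquet("s3://example-bucket/orders/")  # hypothetical input location

daily_totals = (
    events
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date", "city")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count(F.lit(1)).alias("order_count"),
    )
)

# Partitioning by date keeps downstream reads cheap and reruns idempotent.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/metrics/daily_orders/"  # hypothetical output location
)
```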

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Data Scientist

Position: Data Scientist
Experience Level: [Junior/Mid-level], 3-5 years

Job Summary: We are seeking a Data Scientist to join our team to analyze complex datasets, develop machine learning models, and provide data-driven insights to support business decisions.

Key Responsibilities

Data Analysis & Exploration:
- Perform comprehensive exploratory data analysis (EDA) on large datasets
- Identify patterns, trends, and anomalies in data
- Conduct statistical analysis to validate business hypotheses
- Create data visualizations to communicate findings effectively
- Assess and ensure data quality and integrity
- Write complex SQL queries to extract and manipulate data

Machine Learning & Modeling:
- Design and develop machine learning models for business problems (e.g., XGBoost, logistic regression, DNNs, RNNs)
- Implement supervised and unsupervised learning algorithms
- Perform feature engineering and selection
- Evaluate model performance using appropriate metrics
- Deploy and monitor machine learning models in production

Programming & Development:
- Develop data analysis scripts and automation tools using Python
- Build data pipelines and ETL processes
- Create reusable code libraries and functions
- Maintain version control and documentation standards

Required Qualifications (Technical Skills):
- SQL: advanced proficiency in writing complex queries, joins, subqueries, and database optimization
- Python: strong programming skills for data analysis and machine learning
- Exploratory data analysis: expertise in EDA techniques, statistical analysis, and data visualization
- Machine learning: solid understanding of ML algorithms, model evaluation, and validation techniques
- Statistics: knowledge of statistical methods, hypothesis testing, and experimental design
- Knowledge of any cloud platform (AWS, GCP, or Azure) is good to have
- Familiarity with version control systems
- Experience with containerization and deployment tools

Good to Have:
- Experience on GenAI-based projects and using GenAI to drive productivity in your work
- Knowledge of PySpark is a plus
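For illustration: a minimal sketch of the EDA-plus-modelling workflow this listing describes, using pandas and scikit-learn. The file name, feature columns, and target are hypothetical assumptions, not part of the posting.

```python
# Minimal EDA + model-evaluation sketch; dataset and column names are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                         # hypothetical dataset
print(df.describe())                                      # quick EDA: ranges and spread
print(df.isna().mean().sort_values(ascending=False).head())  # missing-value profile

X = df[["age", "tenure_months", "monthly_spend"]].fillna(0)  # hypothetical features
y = df["churned"]                                            # hypothetical binary target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate with a threshold-free metric before considering deployment.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```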

Posted 1 day ago

Apply

7.0 years

40 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

Experience: 7+ years
Salary: INR 4,000,000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity? Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform. As the Technical Lead for the data platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM)
- Using generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards)
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform

Requirements:
- At least 7 years of experience in data engineering
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders

Brownie Points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3
- Experience building data platforms for ML/AI teams or integrating with model feature stores

Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
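For illustration only: a minimal PySpark sketch of the kind of open-table-format write the listing describes (a batch upsert into an Apache Iceberg table). The catalog configuration, warehouse path, table, and column names are assumptions for the sketch, not details from the posting; a Glue-backed catalog would use different catalog settings.

```python
# Minimal sketch: batch upsert into an Apache Iceberg table from PySpark.
# Requires the iceberg-spark-runtime package on the classpath; catalog name,
# warehouse path, table, and column names are illustrative assumptions.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("ingest-transactions")
    .config("spark.sql.extensions",
            "org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")   # a Glue catalog would be configured differently
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

incoming = spark.read.parquet("s3://example-bucket/staging/transactions/")  # hypothetical staging drop
incoming.createOrReplaceTempView("incoming")

# MERGE keeps the curated table idempotent if the same batch is replayed.
spark.sql("""
    MERGE INTO lake.curated.transactions AS t
    USING incoming AS s
    ON t.txn_id = s.txn_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```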

Posted 1 day ago

Apply

0.0 - 3.0 years

0 Lacs

Mohali, Punjab

On-site

Source: Indeed

What You Need for this Position:
- Highly adaptable to change
- Unit testing, black-box testing, white-box testing, automation, testing tools; fast learner
- Proactive self-starter who can work effectively individually and with project team members
- Excellent verbal, written, and presentation communication skills
- Proven experience as a QA engineer, tester, or in a similar role
- Experience in project management and QA methodology
- Ability to document and troubleshoot errors
- Working knowledge of test management software (e.g. qTest, Zephyr, LoadRunner), Jira, Rational, and SQL
- Attention to detail
- Regression testing and test automation
- Extensive use of testing tools and QA reporting
- Angular 7, PHP 7, Python, .NET, MS SQL, MySQL
- Strong JavaScript and jQuery
- Experience with AWS cloud services will be a bonus
- Mobile device app performance testing
- Skills: Scrum framework, Agile development methodology

What You Will Be Doing:
- Write user personas, use cases, and test cases
- Code quality assurance
- Perform unit tests, integration tests, DB stress tests, injection tests, and performance tests
- Collaborate with business analysts and business teams to develop effective strategies and test plans
- Execute use cases and test cases (manual or automated) and analyze results
- Evaluate product code according to specifications
- Create logs to document testing phases and defects
- Report bugs and errors to development teams
- Conduct post-release/post-implementation testing
- Work with cross-functional teams to ensure quality throughout the software development life cycle
- Assure product quality deliveries with release plans

Top Reasons to Work with Us:
- We're a small, fast-paced, growing team tackling huge new challenges every day
- Learn new concepts while working with an intellectual and exceptionally talented team
- Friendly, high-growth work environment
- Competitive compensation

Job Type: Full-time
Pay: ₹450,000.00 - ₹650,000.00 per year
Benefits: Flexible schedule, health insurance, leave encashment, Provident Fund
Schedule: Day shift, fixed shift, morning shift
Supplemental Pay: Performance bonus, quarterly bonus
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: total work: 3 years (Required)
Work Location: In person
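For illustration: a minimal automated-test sketch of the kind of unit and boundary testing this posting lists, written with pytest. The function under test and its expected behaviour are hypothetical, included only to show the shape of such tests.

```python
# Minimal pytest sketch; apply_discount is a hypothetical function under test.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Return price after a percentage discount (example code under test)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0

def test_boundary_discounts():
    # Boundary values: no discount and full discount.
    assert apply_discount(99.99, 0) == 99.99
    assert apply_discount(99.99, 100) == 0.0

def test_invalid_percent_rejected():
    # Negative test: invalid input must raise, not silently pass through.
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```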

Posted 1 day ago

Apply

0.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

Source: Indeed

Requirements:
- Proficient in Python, Node.js (or Java), and React (preferred)
- Experience with AWS services: S3, Lambda, DynamoDB, Bedrock, Textract, RDS, Fargate
- Experience in LLM-based application development (LangChain, Bedrock, or OpenAI APIs)
- Strong in NLP and embeddings (via SageMaker or third-party APIs like Cohere, Hugging Face)
- Knowledge of vector databases (Pinecone, ChromaDB, OpenSearch, etc.)
- Familiar with containerization (Docker, ECS/Fargate)
- Excellent understanding of REST API design and security
- Experience handling PDF/image-based document classification
- Good SQL and NoSQL skills (MS SQL, MongoDB)

Preferred Qualifications: AWS Certified, especially in AI/ML or Developer Associate

Job Types: Full-time, Fresher, Internship
Pay: ₹554,144.65 - ₹1,500,000.00 per year
Schedule: Day shift, morning shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Ahmedabad, Gujarat: Reliably commute or planning to relocate before starting work (Preferred)
Work Location: In person
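For illustration: the embeddings and vector-database skills above come down to retrieving documents by embedding similarity. A minimal sketch follows, where the embed() function is a hypothetical stand-in for a Bedrock, SageMaker, Cohere, or Hugging Face embedding call; with the random vectors used here the ranking is meaningless, the code only shows the mechanics.

```python
# Minimal retrieval-by-embedding sketch; embed() is a stand-in for a real embedding API,
# so the resulting ranking is illustrative only.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding call; a real system would call an embedding model/API."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

documents = ["invoice for office chairs", "passport scan", "bank statement March"]
doc_vectors = [embed(d) for d in documents]

query_vec = embed("monthly account statement")

# Rank documents by similarity to the query, as a vector database would under the hood.
ranked = sorted(
    zip(documents, doc_vectors),
    key=lambda dv: cosine_similarity(query_vec, dv[1]),
    reverse=True,
)
print([doc for doc, _ in ranked])
```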

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role Description
Hiring Location: Mumbai/Chennai/Gurgaon

Job Summary: We are seeking a Lead I in Software Engineering with 4 to 7 years of experience in software development or software architecture. The ideal candidate will possess a strong background in Angular and Java, with the ability to lead a team and drive technical projects. A Bachelor's degree in Engineering or Computer Science, or equivalent experience, is required.

Responsibilities:
- Interact with technical personnel and team members to finalize requirements
- Write and review detailed specifications for the development of system components of moderate complexity
- Collaborate with QA and development team members to translate product requirements into software designs
- Implement development processes and coding best practices, and conduct code reviews
- Operate in various development environments (Agile, Waterfall) while collaborating with key stakeholders
- Resolve technical issues as necessary
- Perform all other duties as assigned

Must-Have Skills:
- Strong proficiency in Angular 1.x (70% Angular and 30% Java, or 50% Angular and 50% Java)
- Java/J2EE; familiarity with the Singleton and MVC design patterns
- Strong proficiency in SQL and/or MySQL, including optimization techniques (at least MySQL)
- Experience using tools such as Eclipse, Git, Postman, JIRA, and Confluence
- Knowledge of test-driven development
- Solid understanding of object-oriented programming

Good-to-Have Skills:
- Expertise in Spring Boot, microservices, and API development
- Familiarity with OAuth 2.0 patterns (experience with at least 2 patterns)
- Knowledge of graph databases (e.g., Neo4j, Apache TinkerPop, Gremlin)
- Experience with Kafka messaging
- Familiarity with Docker, Kubernetes, and cloud development
- Experience with CI/CD tools like Jenkins and GitHub Actions
- Knowledge of industry-wide technology trends and best practices

Experience Range: 4 to 7 years of relevant experience in software development or software architecture.
Education: Bachelor's degree in Engineering, Computer Science, or equivalent experience.

Additional Information:
- Strong communication skills, both oral and written
- Ability to interface competently with internal and external technology resources
- Advanced knowledge of software development methodologies (Agile, etc.)
- Experience in setting up and maintaining distributed applications in Unix/Linux environments
- Ability to complete complex bug fixes and support production issues

Skills: Angular 1.x, Java 11+, SQL. The expectation is 60-70% Angular primarily and 30-40% Java.
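For illustration: the posting names the Singleton and MVC design patterns. Below is a minimal, language-agnostic sketch of Singleton written in Python (the role itself is Java/Angular, but the idea carries over directly); the class name and attributes are hypothetical.

```python
# Minimal Singleton sketch: one shared instance, created lazily on first construction.
class ConfigRegistry:
    """Single shared configuration object; __new__ returns the same instance every time."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}          # shared state created exactly once
        return cls._instance

a = ConfigRegistry()
b = ConfigRegistry()
a.settings["db_pool_size"] = 10
assert a is b and b.settings["db_pool_size"] == 10   # both names refer to the one instance
```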

Posted 1 day ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Source: LinkedIn

Company Description: WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services, and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.

Job Description:
- Applied knowledge of ISO 27001 / SOC controls
- Provide RCA for technical issues
- Information and data security principles
- ITIL policies and procedures operations
- Comfortable with ITIL change management submissions and process, and with being a CAB member
- Expert knowledge of SQL clusters and BCDR
- Knowing when to use DTU vs vCore
- Running daily health checks and ensuring uptime
- Performing backups and recoveries
- Applying patches and upgrades
- Troubleshooting and resolving database issues
- Documenting and optimizing database processes
- Collaborating with the internal IT team to ensure a seamless workflow
- Handling configuration-based, version-based, and policy-based issues
- Supporting new scope, changing scope, and expanding scope
- Maintaining keys and maintaining connectivity to servers and AD
- Working issues related to connectivity to the data warehouse
- Supporting issues related to servers running slowly and scaling issues
- Skill to manage physical and virtual servers in a large environment, typically 100+ servers
- Knowledge of ITIL
- Knowledge of networking fundamentals
- Experience in tracking server activity, performing software upgrades, and addressing technical problems
- Good documentation skills

Qualifications:
- Must be knowledgeable in best practices
- Accountable for ensuring SLA adherence with on-time ticket acceptance and closures
- Ready to work in rotational shifts (24x5)
- Required to prepare technical SOPs and bring in improvements
- Managing and prioritizing assigned tasks, collaborating with team members when needed (business projects, change controls, documentation, vulnerability remediation, etc.)
- Bachelor's degree in a technical field, or experience and certifications demonstrating the required knowledge
- Highly knowledgeable in performance tuning, including query optimization
- Exceptional communication skills
- Comfortable working on multiple projects and issues simultaneously
- Demonstrable desire to learn and remain current with technical knowledge
- Provide breakdowns of technical projects into steps with time estimates
- Collaborate with colleagues (development teams, infrastructure, management)
- Expert-level technical troubleshooting and problem solving
- Knowledge of service principals, managed identities, and private endpoint networking
- Comfortable working in an Agile-like environment, working from a backlog in Jira or other tools
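For illustration: a minimal sketch of the kind of daily SQL Server health check the posting mentions, written in Python with pyodbc. The server name, credentials, and thresholds are assumptions, and the backup check assumes the monitoring login can read msdb.

```python
# Minimal daily health-check sketch for SQL Server; connection details, credentials,
# and the 24-hour backup threshold are illustrative assumptions.
from datetime import datetime, timedelta
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=example-server;DATABASE=master;"
    "UID=monitor_user;PWD=example-password;Encrypt=yes;TrustServerCertificate=yes"
)
cursor = conn.cursor()

# 1. Every database should be ONLINE.
cursor.execute("SELECT name, state_desc FROM sys.databases")
for name, state in cursor.fetchall():
    if state != "ONLINE":
        print(f"ALERT: database {name} is {state}")

# 2. Flag databases whose last full backup is missing or older than 24 hours.
cursor.execute(
    "SELECT d.name, MAX(b.backup_finish_date) "
    "FROM sys.databases d "
    "LEFT JOIN msdb.dbo.backupset b ON b.database_name = d.name AND b.type = 'D' "
    "WHERE d.name <> 'tempdb' "
    "GROUP BY d.name"
)
for name, last_backup in cursor.fetchall():
    if last_backup is None or last_backup < datetime.now() - timedelta(hours=24):
        print(f"ALERT: {name} last full backup: {last_backup}")

conn.close()
```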

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Tech Lead

Experience: 5+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Office (Chennai)
Placement Type: Full-time permanent position
(Note: This is a requirement for one of Uplers' clients - NetXD)

What do you need for this opportunity? Must-have skills: Golang, Go (Golang), Go, MongoDB, PostgreSQL, Angular, React

NetXD is looking for:

Responsibilities:
- Lead end-to-end delivery of a Golang banking/payments backend system from design to deployment, ensuring speed, reliability, and compliance with banking regulations
- Mentor and guide junior developers
- Collaborate with product managers, QA engineers, and DevOps teams

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Experience:
- 5-6 years of overall software development experience
- At least 2 years of hands-on experience in Golang (mandatory)
- Proven experience building backend systems from scratch

Technical Skills (Mandatory):
- Backend development: Golang expertise in developing high-performance backend systems
- Databases: MongoDB (preferred), or experience with SQL databases (e.g., PostgreSQL, MySQL)
- Messaging systems: NATS.io (preferred), or Kafka, RabbitMQ, IBM MQ
- API protocols: gRPC (preferred) or RESTful APIs
- Exposure to microservices architecture and distributed systems
- Experience with AI-assisted coding tools (e.g., GitHub Copilot, Cline)
- Familiarity with CI/CD pipelines and version control (Git)
- Frontend: exposure to Angular, React, or similar frameworks

Preferred Skills:
- Banking domain knowledge: ISO 8583, ISO 20022, ACH/wire, FedNow, RTP, card payments, double-entry accounting
- Cloud & DevOps: AWS, Docker, Kubernetes, Terraform, or Nomad

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances to get shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Overview: You'll design and implement scalable solutions for award-winning platforms like LMX and MAX, automating media transactions and bridging media buyers and sellers. Work in an Agile, POD-based model to revolutionize the role of data and technology in OOH advertising.

What You'll Do:
- Architect scalable solutions aligned with business goals and market needs
- Lead Agile POD teams to deliver iterative, high-impact solutions
- Enhance products with advanced features like dynamic rate cards and inventory mapping
- Ensure best practices in security, scalability, and performance

What You Bring:
- Strong expertise in cloud-based architectures, API integrations, and data analytics
- Proven experience in Agile environments and POD-based execution
- Technical proficiency in Java, Angular, Python, and AWS

Required Skills:
- 8+ years of experience as a Solution Architect
- Bachelor's/Master's in Computer Science or a related field
- Proficiency in Java, Angular, Python, MongoDB, SQL, NoSQL, and AWS
- Strong understanding of Agile methodologies and POD-based execution

Tech Stack:
- Languages: Java, Python
- Frontend: Angular
- Databases: MongoDB, SQL, NoSQL
- Cloud: AWS

Posted 1 day ago

Apply

7.0 years

40 Lacs

Coimbatore, Tamil Nadu, India

Remote

Source: LinkedIn

(This listing repeats the MatchMove "Technical Lead - Data Platform" description verbatim; see the full description in the earlier MatchMove listing above.)

Posted 1 day ago

Apply

7.0 years

40 Lacs

Vellore, Tamil Nadu, India

Remote

Source: LinkedIn

(This listing repeats the MatchMove "Technical Lead - Data Platform" description verbatim; see the full description in the earlier MatchMove listing above.)

Posted 1 day ago

Apply

7.0 years

40 Lacs

Madurai, Tamil Nadu, India

Remote

Source: LinkedIn

(This listing repeats the MatchMove "Technical Lead - Data Platform" description verbatim; see the full description in the earlier MatchMove listing above.)

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

About BCS: BCS is a cloud tech organization with a global presence and branches across India, Europe, the US, and Australia. The company was established in 2014 by a group of ambitious SAP Basis consultants with progressive experience amounting to a few decades, with a vision to provide intelligible and pre-eminent Basis support and cloud automation, tailor-made to the ultimate needs of each business.

Job Overview: We are looking for a highly skilled SAP Basis Consultant with 8 to 12 years of experience in SAP system administration. The ideal candidate will have deep expertise in SAP architecture, installations, upgrades, and performance tuning, along with the ability to lead initiatives and mentor junior team members. This role plays a critical part in managing and optimizing SAP landscapes and supporting complex projects across various platforms and technologies.

Roles & Responsibilities:
- Manage and support SAP HANA DB, SAP S/4HANA, and related application environments
- Perform SAP HANA administration, including troubleshooting, database upgrades, refreshes, and HA/DR configurations
- Execute version upgrades, support pack upgrades, and database upgrade activities
- Provide expert support in AMS (Application Management Services) projects across multiple SAP products, OS, and database platforms
- Administer ABAP and Java stacks, including installations, upgrades, and ongoing maintenance
- Handle daily Basis operations, monitoring, and incident resolution within AMS support scope
- Take ownership of technical project leadership tasks and collaborate with cross-functional teams
- Support flexible shift operations and take the lead on critical projects
- Work with various SAP products such as SAP ECC, PI/PO, SAP Portal, Solution Manager (SOLMAN), BOBJ, and IDM
- Troubleshoot performance bottlenecks and handle critical issues with detailed RCA documentation
- Contribute to OS/DB migration and S/4HANA transformation planning and implementation

Required Skills:
- Strong knowledge of SAP ECC, S/4HANA, and SAP NetWeaver
- Expertise in HANA, Oracle, and SQL Server database administration
- Experience with Linux and Windows operating systems
- Excellent communication skills, both verbal and written, with the ability to interact effectively with stakeholders at all levels
- Proven track record in leading and mentoring team members and supporting junior consultants
- Demonstrated ability to manage complex SAP environments and deliver high-quality outcomes

Certifications: SAP Certified Technology Associate preferred; advanced certifications in SAP S/4HANA and cloud solutions are a plus.

Quick Facts:
- World's fastest growing SAP on Public Cloud company
- 100% retention rate of happy clients since inception
- 250+ global employees, with a 100% staff retention rate in the first 9 years of the business, currently 98%
- Offices in Chennai, the Netherlands, Sydney, the UK, and South Carolina

What do we value at BCS?
- PASSION - We love what we do.
- DETERMINATION - We always find a way to "figure it out".
- UNITY - We have each other's back and challenge one another to strive for better.
- AGILITY - We anticipate the unexpected, embrace and adapt to change.

You would be working for a company that:
- Is built on a dynamic and very seasoned team marching towards one purpose
- Believes in hard work, team effort, and empathy; skills are secondary, attitude comes first
- Does not strive for work-life balance but rather blends work and fun into life to achieve success both professionally and personally
- Gives opportunities to be associated with international teams for projects in Europe or Australia

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Role: Senior .NET Developer
Experience: 5+ years
Location: Chennai
Notice period: 30 days
Interested? Share your updated resume to dpeddolla@arigs.com

Role Description: This is a full-time on-site role for a Senior .NET Developer located in Chennai. The Senior .NET Developer will be responsible for designing, developing, and implementing software applications using .NET technologies. Daily tasks will include coding, debugging, unit testing, and maintaining software applications. The developer will collaborate with cross-functional teams to deliver high-quality solutions and ensure alignment with business requirements.

Qualifications:
- Expertise in Object-Oriented Programming (OOP)
- Proficiency in .NET Core, ASP.NET MVC, C#, Web API, SQL Server, CQRS, design patterns, MediatR, and Azure
- Ability to develop and maintain CI/CD pipelines in Azure
- Strong skills in software development
- Excellent analytical and problem-solving abilities
- Effective communication and collaboration skills
- Experience in Agile development methodologies is a plus
- Bachelor's degree in Computer Science, Engineering, or a relevant field
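For illustration: the posting names CQRS (usually implemented with MediatR in .NET). Below is a minimal, language-agnostic sketch of the command/query split written in Python (Python 3.10+ syntax); the class names, handlers, and the in-memory store are hypothetical, meant only to show the pattern, not the client's implementation.

```python
# Minimal CQRS-style sketch: commands mutate state, queries only read it.
# The posting is .NET/MediatR-based; this Python version only illustrates the pattern.
from dataclasses import dataclass

_orders: dict[int, str] = {}   # stand-in for the write-side store

@dataclass
class CreateOrder:             # command: expresses intent to change state
    order_id: int
    item: str

@dataclass
class GetOrder:                # query: asks for data, never changes it
    order_id: int

def handle_create_order(cmd: CreateOrder) -> None:
    if cmd.order_id in _orders:
        raise ValueError("order already exists")
    _orders[cmd.order_id] = cmd.item

def handle_get_order(qry: GetOrder) -> str | None:
    return _orders.get(qry.order_id)    # read side; could be a separate projection

handle_create_order(CreateOrder(1, "keyboard"))
print(handle_get_order(GetOrder(1)))    # -> "keyboard"
```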

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Source: LinkedIn

Job Role: SAP ABAP
Job Location: Chennai/Hyderabad/Kolkata/Mumbai/NCR
Experience Range: 4 to 8 years

Desired Competencies (Technical/Behavioral):
- This is a SAP DevOps technical lead role in an SAP S/4 environment
- Good experience working in an onshore/offshore model
- Should have experience working with interfaces to SAP
- Excellent interpersonal and organizational skills, with the ability to communicate
- The candidate should be ready to work flexible timings
- Strong knowledge and understanding of IT service provider organizations is helpful
- Should have good integration knowledge with other supply chain areas such as PP, QM, SD, and SAP APO
- Knowledge of batch job monitoring and IDoc processing is mandatory
- Must have experience in each of the following areas: list (report) programming, dialog (transaction) programming, enhancements (user exits, BAdI, BAPI, etc.), interfaces (IDocs, remote function calls, web services, etc.), and Adobe Forms/Smart Forms
- Hands-on experience with HANA-compatible developments such as CDS views, AMDP, OData, SQL queries, SQL functions, and ABAP on HANA
- Knowledge of ABAP object-oriented programming
- Knowledge of SAP Workflow

Posted 1 day ago

Apply

7.0 years

40 Lacs

Surat, Gujarat, India

Remote

Source: LinkedIn

(This listing repeats the MatchMove "Technical Lead - Data Platform" description verbatim; see the full description in the earlier MatchMove listing above.)

Posted 1 day ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Project Role: Application Support Engineer
Project Role Description: Act as a software detective, providing a dynamic service identifying and solving issues within multiple components of critical business systems.
Must-have skills: Electronic Medical Records (EMR)
Good-to-have skills: NA
Minimum 2 years of experience is required
Educational Qualification: 15 years of full-time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service identifying and solving issues within multiple components of critical business systems. Your day will involve troubleshooting and resolving technical issues to ensure seamless operations.

Roles & Responsibilities:
- Expected to perform independently and become an SME
- Active participation/contribution in team discussions is required
- Contribute to providing solutions to work-related problems
- Proactively identify and resolve technical issues within critical business systems
- Collaborate with cross-functional teams to troubleshoot and address system malfunctions
- Develop and implement solutions to enhance system performance and reliability
- Provide technical support and guidance to end users on system functionalities
- Document and maintain system configurations and troubleshooting procedures

Professional & Technical Skills:
- Must-have: proficiency in Electronic Medical Records (EMR)
- Strong understanding of database management and SQL queries
- Experience in system monitoring and performance optimization
- Knowledge of the ITIL framework and incident management processes
- Hands-on experience in diagnosing and resolving software and hardware issues

Additional Information:
- The candidate should have a minimum of 2 years of experience in Electronic Medical Records (EMR)
- Work from office is mandatory for all working days
- This position is based at our Chennai office
- 15 years of full-time education is required

Posted 1 day ago

Apply

2.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Job Summary: We are looking for a skilled and motivated .NET Core Developer with 2 years of professional experience. The ideal candidate will have strong backend development skills and a solid understanding of building scalable, efficient web applications using .NET Core.
Key Responsibilities:
Develop, test, and maintain web applications using .NET Core and C#
Design and consume RESTful APIs for seamless data integration
Work with SQL Server and write complex queries and stored procedures
Collaborate with UI/UX designers and front-end developers to ensure smooth integration
Participate in code reviews, debugging, and performance optimization
Write clean, maintainable code and follow best practices in software development
Contribute to unit testing and continuous improvement of the codebase
Maintain proper documentation of development work and processes
Required Skills & Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field
2 years of hands-on experience in .NET Core and C#
Strong understanding of Object-Oriented Programming (OOP), SOLID principles, and design patterns
Proficiency in SQL Server and database design
Experience with Entity Framework Core or ADO.NET
Familiarity with front-end basics: HTML, CSS, JavaScript, jQuery
Experience with version control systems like Git or SVN
Basic knowledge of RESTful API integration
Nice to Have:
Experience with front-end frameworks such as Angular or React
Knowledge of cloud platforms like Azure or AWS
Familiarity with CI/CD pipelines and DevOps tools

Posted 1 day ago

Apply

7.0 years

40 Lacs

Ahmedabad, Gujarat, India

Remote


Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)
What do you need for this opportunity?
Must-have skills required: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python
MatchMove is looking for: Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.
Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
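As a rough illustration of the orchestration work this role describes (Glue pipelines driven by Airflow or Step Functions), the sketch below shows the kind of step an orchestration task might wrap: start an AWS Glue job run with boto3 and poll until it reaches a terminal state. The job name and region are hypothetical assumptions, not details from the posting.

```python
# Illustrative-only sketch: trigger a Glue job run and block until it finishes.
# An Airflow task or Step Functions state would typically wrap logic like this.
import time

import boto3

glue = boto3.client("glue", region_name="ap-southeast-1")  # assumed region

def run_glue_job(job_name: str, poll_seconds: int = 30) -> str:
    """Start a Glue job run and poll until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=job_name)["JobRunId"]
    while True:
        run = glue.get_job_run(JobName=job_name, RunId=run_id)["JobRun"]
        state = run["JobRunState"]
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(poll_seconds)

if __name__ == "__main__":
    final_state = run_glue_job("curate-transactions")  # hypothetical Glue job name
    if final_state != "SUCCEEDED":
        raise RuntimeError(f"Glue job ended in state {final_state}")
```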

Posted 1 day ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Senior Python Developer (4–6 Years Experience)
Location: Gurgaon
Job Type: Full-Time – WFO
About the Role: We are seeking a skilled and motivated Senior Python Developer with 4–6 years of experience and a B.Tech in CSE or IT. You’ll design, develop, and maintain scalable applications, working within an Agile team and collaborating across functions.
Key Responsibilities:
Design, write, and maintain efficient, scalable Python code.
Collaborate with product managers, designers, and developers.
Build and integrate RESTful APIs and microservices.
Conduct code reviews and ensure best practices.
Work with SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
Debug, optimize, and contribute to architecture and performance.
Write unit/integration tests and explore new technologies.
Mentor junior developers and participate in Agile ceremonies.
Required Skills & Qualifications:
B.Tech in CSE, IT, or a related field.
4–6 years of Python development experience.
Proficient in Django, Flask, or FastAPI.
Solid understanding of OOP, data structures, and algorithms.
Experience with AWS/GCP/Azure, Docker, Git, CI/CD.
Skilled in writing clean, maintainable code.
Strong communication and Agile collaboration skills.
Nice-to-Have Skills:
Knowledge of React/Angular, Kubernetes, RabbitMQ/Kafka.
Familiarity with GraphQL, ML, or data science concepts.
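To give a feel for the RESTful microservice work this posting emphasizes, here is a minimal FastAPI sketch (one of the frameworks the posting lists). The resource model, endpoint paths, and in-memory store are hypothetical examples, not anything specified by the employer.

```python
# Minimal FastAPI microservice sketch (illustrative only); run with:
#   uvicorn orders_service:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")

class Order(BaseModel):
    id: int
    item: str
    quantity: int

# In-memory store for the sketch; a real service would use PostgreSQL or MongoDB.
_ORDERS: dict[int, Order] = {}

@app.post("/orders", status_code=201)
def create_order(order: Order) -> Order:
    _ORDERS[order.id] = order
    return order

@app.get("/orders/{order_id}")
def get_order(order_id: int) -> Order:
    if order_id not in _ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return _ORDERS[order_id]
```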

Posted 1 day ago

Apply

7.0 years

40 Lacs

Gurugram, Haryana, India

Remote


Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-Time Permanent position (Payroll and Compliance to be managed by: MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)
What do you need for this opportunity?
Must-have skills required: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python
MatchMove is looking for: Technical Lead - Data Platform
As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.
You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.
Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.
Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.
Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.
Engagement Model: Direct placement with the client. This is a remote role. Shift timings: 10 AM to 7 PM.
How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!
About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
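To illustrate the stream-processing side of the requirements above, the sketch below consumes events from Kafka with the kafka-python client and applies a toy screening rule. The topic, broker address, and event fields are hypothetical assumptions, not details from the posting.

```python
# Illustrative-only Kafka consumer sketch; assumes kafka-python is installed and a
# broker is reachable at the address below.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                                   # hypothetical topic
    bootstrap_servers=["localhost:9092"],             # assumption: local broker
    group_id="fraud-screening",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # Toy rule: flag unusually large transfers for downstream review.
    if event.get("amount", 0) > 100_000:
        print(f"flagged txn {event.get('transaction_id')} at offset {message.offset}")
```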

Posted 1 day ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Gartner IT: Join a world-class team of skilled engineers who build creative digital solutions to support our colleagues and clients. We make a broad organizational impact by delivering cutting-edge technology solutions that power Gartner. Gartner IT values its culture of nonstop innovation, an outcome-driven approach to success, and the notion that great ideas can come from anyone on the team.
About this role: Gartner is looking for passionate and motivated Senior Software Engineers who are excited to foray into new technologies and help build and maintain data-driven, scalable, and secure applications and tools supporting the product delivery organization (PDO) at Gartner. PDO software engineering teams are high-velocity agile teams responsible for developing and maintaining components crucial to customer-facing channels, reporting, and analysis. These components include, but are not limited to, web applications, microservices, DevOps pipelines, batch jobs, and data streams.
What you’ll do:
Design, implement, unit and integration test, and support Java/Spring and JavaScript (React.js, Angular.js, jQuery.js) based applications and services.
Contribute to the review and analysis of business requirements.
Perform and participate in code reviews, peer inspections, and technical design/specifications.
Ensure code integrity standards and code best practices. Document and review detailed designs.
Identify and resolve web performance, usability, and scalability issues.
What you will need: 4-6 years of post-college experience in software engineering, API development, or related fields. The candidate should have strong qualitative and quantitative problem-solving skills along with a high degree of ownership and accountability.
Must have:
4-6 years of experience with Java/Spring framework development.
Experience with Kanban or Agile Scrum development.
Experience in design and development of web applications and microservices using Java, JavaScript frameworks (React/Angular etc.), Docker, SQL, and Jenkins pipelines.
Experience working with Postgres, Oracle, or an equivalent enterprise RDBMS.
Experience working on AWS.
Experience with REST-based APIs.
Excellent understanding of Object-Oriented Programming with design patterns.
Experience with DevOps and collaboration tools such as Git, Jenkins, Jira, Confluence.
Understanding of CSS extensions and frameworks such as SASS.
Experience with AWS services like Lambdas, EKS, Kinesis, etc.
Who you are:
Bachelor’s degree or foreign equivalent degree in Computer Science or a related field required.
Excellent communication and prioritization skills.
Able to work independently or within a team proactively in a fast-paced Agile-Scrum environment.
Owns success: takes responsibility for successful delivery of the solutions.
Strong desire to improve upon their skills in software development, frameworks, and technologies.
Don’t meet every single requirement? We encourage you to apply anyway. You might just be the right candidate for this, or other roles.
Who are we? At Gartner, Inc. (NYSE:IT), we guide the leaders who shape the world. Our mission relies on expert analysis and bold ideas to deliver actionable, objective insight, helping enterprise leaders and their teams succeed with their mission-critical priorities. Since our founding in 1979, we’ve grown to more than 21,000 associates globally who support ~14,000 client enterprises in ~90 countries and territories. We do important, interesting and substantive work that matters.
That’s why we hire associates with the intellectual curiosity, energy and drive to want to make a difference. The bar is unapologetically high. So is the impact you can have here.
What makes Gartner a great place to work? Our sustained success creates limitless opportunities for you to grow professionally and flourish personally. We have a vast, virtually untapped market potential ahead of us, providing you with an exciting trajectory long into the future. How far you go is driven by your passion and performance. We hire remarkable people who collaborate and win as a team. Together, our singular, unifying goal is to deliver results for our clients. Our teams are inclusive and composed of individuals from different geographies, cultures, religions, ethnicities, races, genders, sexual orientations, abilities and generations. We invest in great leaders who bring out the best in you and the company, enabling us to multiply our impact and results. This is why, year after year, we are recognized worldwide as a great place to work.
What do we offer? Gartner offers world-class benefits, highly competitive compensation and disproportionate rewards for top performers. In our hybrid work environment, we provide the flexibility and support for you to thrive, working virtually when it's productive to do so and getting together with colleagues in a vibrant community that is purposeful, engaging and inspiring. Ready to grow your career with Gartner? Join us.
The policy of Gartner is to provide equal employment opportunities to all applicants and employees without regard to race, color, creed, religion, sex, sexual orientation, gender identity, marital status, citizenship status, age, national origin, ancestry, disability, veteran status, or any other legally protected status and to seek to advance the principles of equal employment opportunity.
Gartner is committed to being an Equal Opportunity Employer and offers opportunities to all job seekers, including job seekers with disabilities. If you are a qualified individual with a disability or a disabled veteran, you may request a reasonable accommodation if you are unable or limited in your ability to use or access the Company’s career webpage as a result of your disability. You may request reasonable accommodations by calling Human Resources at +1 (203) 964-0096 or by sending an email to ApplicantAccommodations@gartner.com.
Job Requisition ID: 99402
By submitting your information and application, you confirm that you have read and agree to the country or regional recruitment notice linked below applicable to your place of residence. Gartner Applicant Privacy Link: https://jobs.gartner.com/applicant-privacy-policy
For efficient navigation through the application, please only use the back button within the application, not the back arrow within your browser.

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Greetings, TCS is looking for a Dotnet Developer.
Experience: 2 - 6 years only
Education: Minimum 15 years of full-time education (10th, 12th, and Graduation)
Location: Chennai, Pune, Bangalore, Kolkata, Hyderabad
Must-Have:
Minimum of 2 years of hands-on experience in the .NET framework, including ASP.NET, MVC, and Web API.
Secondary skills required: SQL and one database such as Oracle or MongoDB.
Proficiency in C#.
Enterprise application development experience using CI/CD pipelines.
Good command of code version management.
Experience in unit testing frameworks.
Agile development experience.
Roles require interaction with developers distributed across locations.
Good communication skills and good analytical skills.
Good-to-Have:
Experience with front-end technologies such as HTML, CSS, JavaScript, and frameworks like Angular or React is a plus.
Familiarity with version control systems such as Git.
Experience in event-driven architecture.
Banking exposure.
Thanks & Regards,
Sreevidya MP
Talent Acquisition Specialist
Tata Consultancy Services

Posted 1 day ago

Apply

4.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: GCP Teradata Engineer
Location: Chennai, Bangalore, Hyderabad
Experience: 4-6 Years
Job Summary: We are seeking a GCP Data & Cloud Engineer with strong expertise in Google Cloud Platform services, including BigQuery, Cloud Run, Cloud Storage, and Pub/Sub. The ideal candidate will have deep experience in SQL coding, data pipeline development, and deploying cloud-native solutions.
Key Responsibilities:
Design, implement, and optimize scalable data pipelines and services using GCP.
Build and manage cloud-native applications deployed via Cloud Run.
Develop complex and performance-optimized SQL queries for analytics and data transformation.
Manage and automate data storage, retrieval, and archival using Cloud Storage.
Implement event-driven architectures using Google Pub/Sub.
Work with large datasets in BigQuery, including ETL/ELT design and query optimization.
Ensure security, monitoring, and compliance of cloud-based systems.
Collaborate with data analysts, engineers, and product teams to deliver end-to-end cloud solutions.
Required Skills & Experience:
4 years of experience working with Google Cloud Platform (GCP).
Strong proficiency in SQL coding, query tuning, and handling complex data transformations.
Hands-on experience with BigQuery, Cloud Run, Cloud Storage, and Pub/Sub.
Understanding of data pipeline and ETL/ELT workflows in cloud environments.
Familiarity with containerized services and CI/CD pipelines.
Experience in scripting languages (e.g., Python, Shell) is a plus.
Strong analytical and problem-solving skills.
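To illustrate the event-driven GCP pattern this role describes, here is a minimal Python sketch that publishes an event to Pub/Sub and then runs a BigQuery aggregation over data an ingestion pipeline would land. The project, topic, dataset, and table names are hypothetical, and it assumes the google-cloud-pubsub and google-cloud-bigquery client libraries plus default application credentials.

```python
# Illustrative-only sketch: publish an order event, then query the landed data.
import json

from google.cloud import bigquery, pubsub_v1

PROJECT_ID = "example-project"  # hypothetical project

# 1) Publish an order event to a Pub/Sub topic.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, "orders-events")  # hypothetical topic
payload = json.dumps({"order_id": "o-123", "amount": 499.0}).encode("utf-8")
publisher.publish(topic_path, data=payload).result()            # block until published

# 2) Query the table the ingestion pipeline writes to (e.g., via Dataflow or Cloud Run).
bq = bigquery.Client(project=PROJECT_ID)
query = """
    SELECT DATE(event_ts) AS day, SUM(amount) AS gross_sales
    FROM `example-project.analytics.orders`  -- hypothetical table
    WHERE event_ts >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY day
    ORDER BY day
"""
for row in bq.query(query).result():
    print(row.day, row.gross_sales)
```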

Posted 1 day ago

Apply