Experience: 1+ years | Salary: 25 Lacs per annum | Location: Cuttack, Odisha, India | Work mode: Remote
Experience: 1+ years
Salary: INR 2,500,000 per year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Yugen AI)
(Note: this is a requirement for one of Uplers' clients, Yugen AI.)

What do you need for this opportunity?
Must-have skills: Rust, AWS, GCP, Kafka, NATS, Grafana/Prometheus, Blockchain

What Yugen AI is looking for:
We are looking for backend engineers with 1-3 years of production experience shipping and supporting backend code. You will join the team that owns the real-time ingestion and analytics layer powering customer-facing dashboards, trading tools, and research.

Responsibilities
- Design, build, and scale streaming workflows that move and process multi-terabyte (and rapidly growing) datasets.
- Guarantee end-to-end reliability by owning performance, fault tolerance, and cost efficiency from source to sink.
- Instrument every job with tracing, structured logs, and Prometheus metrics, so that every job reports how it is doing.
- Publish Grafana dashboards and alerts for latency, throughput, and failure rates, and act on them before users notice.
- Partner with DevOps to containerize workloads and automate deployments.
- Collaborate with stakeholders to verify data completeness, run automated reconciliation checks, and replay late or corrected records to keep datasets pristine.

Skills
- Proficient in Rust: comfortable with ownership, borrowing, async/await, cargo tooling, and profiling/optimisation.
- Stream processing: you have built or maintained high-throughput pipelines on NATS (ideal) or Kafka.
- Deep systems engineering: you think about concurrency models, memory footprints, network I/O, back-pressure, and graceful degradation, and can instrument code with tracing, metrics, and logs.
- ClickHouse (or a similar OLAP store): able to design MergeTree tables, reason about partition and ORDER BY keys, and optimise bulk inserts.
- Cloud: you have deployed containerised workloads on AWS or GCP, wired up CI/CD, and tuned instance types or autoscaling groups.
- Nice to have: exposure to blockchain or high-volume financial data streams.
(A brief, hedged code sketch illustrating the NATS, Prometheus, and ClickHouse items appears after this listing.)

How to apply for this opportunity?
Step 1: Click "Apply" and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Improve your chances of being shortlisted and meet the client for the interview.

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help our talents find and apply for relevant contractual opportunities and progress in their careers. We will support any grievances or challenges you may face during the engagement. (Note: there are many more opportunities on the portal besides this one; depending on the assessments you clear, you can apply for those as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
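To make the stream-processing, observability, and ClickHouse items in the skills list concrete, here is a minimal sketch (not from the posting) of the kind of worker this role describes: an async Rust task that consumes messages from NATS and counts them with a Prometheus metric, alongside an example MergeTree DDL. It assumes the tokio, async-nats, futures, and prometheus crates; every subject, table, metric, and column name here is an illustrative assumption.

```rust
// Minimal illustrative sketch -- NOT from the job posting. Assumes the
// tokio, async-nats, futures, and prometheus crates; all subjects, table
// names, and metric names are hypothetical.
use futures::StreamExt;
use prometheus::{Encoder, IntCounter, Registry, TextEncoder};

// Hypothetical ClickHouse MergeTree DDL of the kind the skills list alludes
// to: daily partitions plus an ORDER BY key chosen for time-series lookups.
const TRADES_DDL: &str = "
CREATE TABLE IF NOT EXISTS trades (
    ts     DateTime64(3),
    symbol LowCardinality(String),
    price  Float64,
    qty    Float64
) ENGINE = MergeTree
PARTITION BY toDate(ts)
ORDER BY (symbol, ts)";

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Register a throughput counter so the job can report how it is doing.
    let registry = Registry::new();
    let consumed = IntCounter::new("messages_consumed_total", "Messages read from NATS")?;
    registry.register(Box::new(consumed.clone()))?;

    // Subscribe to a hypothetical subject carrying raw trade events.
    let client = async_nats::connect("nats://127.0.0.1:4222").await?;
    let mut sub = client.subscribe("trades.raw").await?;

    // Consume a handful of messages; real code would decode, batch, and
    // bulk-insert into the ClickHouse table created from TRADES_DDL.
    while let Some(msg) = sub.next().await {
        consumed.inc();
        println!("received {} bytes", msg.payload.len());
        if consumed.get() >= 10 {
            break; // keep the sketch finite
        }
    }

    // Render metrics in the Prometheus text format (normally scraped over HTTP).
    let mut buf = Vec::new();
    TextEncoder::new().encode(&registry.gather(), &mut buf)?;
    println!("{}", String::from_utf8(buf)?);
    let _ = TRADES_DDL; // DDL shown for illustration only; not executed here.
    Ok(())
}
```

The PARTITION BY toDate(ts) plus ORDER BY (symbol, ts) pairing in the DDL is one common way to keep daily partitions small while serving per-symbol time-range scans, which is the kind of trade-off the skills list asks candidates to reason about.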
Posted 1 month ago
This opening is also posted, with an identical description, salary, and experience requirement, for: Bhubaneswar (Odisha); Guwahati (Assam); Ranchi and Jamshedpur (Jharkhand); Raipur (Chhattisgarh); Amritsar (Punjab); Jaipur (Rajasthan); the Greater Lucknow Area and Kanpur (Uttar Pradesh); Thane, Nashik, and Nagpur (Maharashtra); and pan-India (remote).
Experience: 2+ years | Salary: 0 Lacs (not disclosed) | Location: Hyderabad, Telangana, India | Work mode: On-site
Job Description
Join our team as a Solutions Analyst II and be at the forefront of driving technical innovation and strategic business solutions. Your role will be key to transforming complex challenges into efficient, tailored solutions, fostering both personal and professional growth.

As a Solutions Analyst II within our strategic Corporate Technology team, you will play a pivotal role in identifying, enhancing, and creating tech solutions that propel our strategic objectives. This role offers a unique opportunity to gain insight into high-priority projects and to work collaboratively with peers across the organization. Positioned at the crossroads of business and technology, you will deepen your understanding of business processes and data analysis while further developing your leadership, management, and communication skills. Regardless of your career trajectory, your contributions will have a significant impact, and you will form enduring relationships with exceptional colleagues and mentors.

Job Responsibilities
- Contribute to data-driven decision-making by extracting insights from large, diverse data sets and applying data analytics techniques.
- Collaborate with cross-functional teams to provide input on architecture designs and operating systems, ensuring alignment between business strategy and technical solutions.
- Assist in managing project dependencies and change control by demonstrating adaptability and leading through change in a fast-paced environment.
- Promote continuous improvement initiatives by identifying opportunities for process enhancements and applying knowledge of principles and practices within the Solutions Analysis field.
- Work with firmwide-recommended data modeling tools to design and develop data warehouse systems, data marts, OLTP, and OLAP/BI.
- Perform data analysis and data profiling on different kinds of datasets (structured and unstructured) to derive insights that enable better decision-making.
- Guide the work of others, ensuring timely completion and adherence to established principles and practices.

Required Qualifications, Capabilities, and Skills
- Formal training or certification in software engineering concepts and 2+ years of applied experience.
- 2+ years of experience or equivalent expertise in solutions analysis, with a focus on eliciting and documenting business and data flow requirements.
- Use Cucumber with Gherkin to automate test cases in a BDD, step-driven approach, using Java and Groovy to write efficient step definitions.
- Follow strong agile practices through the QA lifecycle with JIRA, with exposure to CI/CD for build regression.
- Core Java, Groovy; testing frameworks: JUnit, PowerMock/Mockito, Cucumber, mutation testing.
- Strong written communication skills, with a proven ability to effectively translate complex information for diverse stakeholder audiences.

Preferred Qualifications, Capabilities, and Skills
- Previous banking or credit card industry experience.
- Knowledge of AWS or other cloud exposure.

About Us
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.

About The Team
Our professionals in our Corporate Functions cover a diverse range of areas, from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we're setting our businesses, clients, customers and employees up for success.
Posted 1 month ago
Experience: 7+ years | Salary: 0 Lacs (not disclosed) | Location: Hyderabad, Telangana, India | Work mode: On-site
Job Description

Key Responsibilities:

Data Engineering & Architecture
- Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
- Build and manage scalable data ingestion frameworks for batch and real-time data processing.
- Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
- Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.

Cloud-Based Data Solutions
- Architect and implement modern data lakehouses, combining the best of data lakes and data warehouses.
- Leverage Azure services such as Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
- Ensure security, compliance, and governance of data through Azure role-based access control (RBAC) and Data Lake ACLs.

ETL/ELT Development
- Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
- Perform data transformations, cleansing, and validation to prepare datasets for analysis.
- Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.

Performance Optimization
- Optimize Spark jobs and SQL queries for large-scale data processing.
- Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads.
- Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.

Collaboration & Stakeholder Management
- Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Participate in cross-functional design sessions to translate business needs into technical specifications.
- Provide thought leadership on best practices in data engineering and cloud computing.

Documentation & Knowledge Sharing
- Create detailed documentation for data workflows, pipelines, and architectural decisions.
- Mentor junior team members and promote a culture of learning and innovation.

Required Qualifications:

Experience
- 7+ years of experience in data engineering, big data, or cloud-based data solutions.
- Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.

Technical Skills
- Strong hands-on experience with Apache Spark and distributed data processing frameworks.
- Advanced proficiency in Python and SQL for data manipulation and pipeline development.
- Deep understanding of data modeling for OLAP, OLTP, and dimensional data models.
- Experience with ETL/ELT tools such as Azure Data Factory or Informatica.
- Familiarity with Azure DevOps for CI/CD pipelines and version control.

Big Data Ecosystem
- Familiarity with Delta Lake for managing big data in Azure.
- Experience with streaming data frameworks such as Kafka, Event Hub, or Spark Streaming.

Cloud Expertise
- Strong understanding of Azure cloud architecture, including storage, compute, and networking.
- Knowledge of Azure security best practices, such as encryption and key management.

Preferred Skills (Nice to Have)
- Experience with machine learning pipelines and frameworks such as MLflow or Azure Machine Learning.
- Knowledge of data visualization tools such as Power BI for creating dashboards and reports.
- Familiarity with Terraform or ARM templates for infrastructure as code (IaC).
- Exposure to NoSQL databases such as Cosmos DB or MongoDB.
- Experience with data governance.

Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job Category: Big Data
Posted 1 month ago
7.0 years
8 - 10 Lacs
Hyderābād
On-site
Job Description:
Key Responsibilities:
Data Engineering & Architecture:
Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
Build and manage scalable data ingestion frameworks for batch and real-time data processing.
Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.
Cloud-Based Data Solutions:
Architect and implement modern data lakehouses combining the best of data lakes and data warehouses.
Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.
ETL/ELT Development:
Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark (a minimal ingestion sketch follows this posting).
Perform data transformations, cleansing, and validation to prepare datasets for analysis.
Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
Performance Optimization:
Optimize Spark jobs and SQL queries for large-scale data processing.
Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads.
Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.
Collaboration & Stakeholder Management:
Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
Participate in cross-functional design sessions to translate business needs into technical specifications.
Provide thought leadership on best practices in data engineering and cloud computing.
Documentation & Knowledge Sharing:
Create detailed documentation for data workflows, pipelines, and architectural decisions.
Mentor junior team members and promote a culture of learning and innovation.
Required Qualifications:
Experience: 7+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.
Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management.
Preferred Skills (Nice to Have): Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview.
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
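For illustration only (not part of the listing): a minimal PySpark sketch of the batch ingestion-and-cleansing pattern the ETL/ELT bullet above describes, assuming a Databricks-style environment with Delta Lake available; the lake paths, column names, and table layout are invented.

```python
# Hedged sketch, not the employer's actual pipeline: batch ingestion from a
# landing zone into a curated Delta table. Paths and columns are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Extract: raw JSON landed in the lake (hypothetical ADLS container/path).
raw = spark.read.json("abfss://landing@examplelake.dfs.core.windows.net/orders/")

# Transform: cleanse and validate before the curated zone.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("ingest_date", F.current_date())
)

# Load: append as Delta, partitioned by ingest date so reads can prune.
(cleaned.write.format("delta")
        .mode("append")
        .partitionBy("ingest_date")
        .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```

Partitioning the curated output by ingest date keeps later reads prunable, which is usually the first lever for the performance work the posting lists.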
Posted 1 month ago
7.0 years
2 - 9 Lacs
Hyderābād
On-site
Job Description:
Key Responsibilities:
Data Engineering & Architecture:
Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
Build and manage scalable data ingestion frameworks for batch and real-time data processing.
Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.
Cloud-Based Data Solutions:
Architect and implement modern data lakehouses combining the best of data lakes and data warehouses.
Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.
ETL/ELT Development:
Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
Perform data transformations, cleansing, and validation to prepare datasets for analysis.
Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
Performance Optimization:
Optimize Spark jobs and SQL queries for large-scale data processing.
Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads (a minimal partitioning-and-caching sketch follows this posting).
Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.
Collaboration & Stakeholder Management:
Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
Participate in cross-functional design sessions to translate business needs into technical specifications.
Provide thought leadership on best practices in data engineering and cloud computing.
Documentation & Knowledge Sharing:
Create detailed documentation for data workflows, pipelines, and architectural decisions.
Mentor junior team members and promote a culture of learning and innovation.
Required Qualifications:
Experience: 7+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming.
Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management.
Preferred Skills (Nice to Have): Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview.
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
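For illustration only: a minimal sketch of the partitioning and caching tactics named under Performance Optimization above. The table paths, shuffle-partition count, and column names are invented, not taken from the listing.

```python
# Hedged sketch, not the employer's actual job: repartition on the hot key
# and cache a reused slice before fanning out aggregations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-aggregates").getOrCreate()

orders = spark.read.format("delta").load("/mnt/curated/orders")

# Repartition on the aggregation key so the shuffle is balanced.
orders = orders.repartition(200, "customer_id")

# Cache the filtered slice that both aggregations below will reuse.
recent = orders.filter(F.col("ingest_date") >= "2024-01-01").cache()

daily = recent.groupBy("ingest_date").agg(F.sum("amount").alias("revenue"))
by_customer = recent.groupBy("customer_id").agg(F.count(F.lit(1)).alias("orders"))

daily.write.format("delta").mode("overwrite").save("/mnt/marts/daily_revenue")
by_customer.write.format("delta").mode("overwrite").save("/mnt/marts/customer_orders")
```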
Posted 1 month ago
2.0 years
5 - 9 Lacs
Hyderābād
On-site
JOB DESCRIPTION
Join our team as a Solutions Analyst II and be at the forefront of driving technical innovation and strategic business solutions. Your role will be key to transforming complex challenges into efficient, tailored solutions, fostering both personal and professional growth.
As a Solutions Analyst II within our strategic Corporate Technology team, you will play a pivotal role in identifying, enhancing, and creating tech solutions that propel our strategic objectives. This role offers a unique opportunity to gain insights into high-priority projects and work collaboratively with peers across the organization. Positioned at the crossroads of business and technology, you will enhance your comprehension of business procedures and data analysis, while further developing your leadership, management, and communication abilities. Regardless of your career trajectory, your contributions will have a significant impact, and you will form enduring relationships with exceptional colleagues and mentors.
Job responsibilities
Contribute to data-driven decision-making by extracting insights from large, diverse data sets and applying data analytics techniques
Collaborate with cross-functional teams to provide input on architecture designs and operating systems, ensuring alignment with business strategy and technical solutions
Assist in managing project dependencies and change control by demonstrating adaptability and leading through change in a fast-paced environment
Promote continuous improvement initiatives by identifying opportunities for process enhancements and applying knowledge of principles and practices within the Solutions Analysis field
Work with firm-wide recommended data modeling tools to design and develop data warehouse systems/data marts, OLTP, and OLAP/BI
Perform data analysis and data profiling on different kinds of datasets (structured, unstructured) to derive insights that enable better decision-making
Guide the work of others, ensuring timely completion and adherence to established principles and practices
Required qualifications, capabilities, and skills
Formal training or certification in software engineering concepts and 2+ years of applied experience
2+ years of experience or equivalent expertise in solutions analysis, with a focus on eliciting and documenting business and data flow requirements
Use Cucumber with Gherkin to automate test cases in a BDD step-driven approach; use Java and Groovy for writing efficient step definitions; follow strong agile practices through the QA lifecycle with JIRA; have exposure to CI/CD for build regression (a minimal BDD sketch follows this posting)
Core Java, Groovy; testing frameworks: JUnit, PowerMock/Mockito, Cucumber, mutation testing
Strong written communication skills, with a proven ability to effectively translate complex information for diverse stakeholder audiences
Preferred qualifications, capabilities, and skills
Previous banking or credit card industry experience
Knowledge of AWS or other cloud exposure
ABOUT US
JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management.
We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation.
ABOUT THE TEAM
Our professionals in our Corporate Functions cover a diverse range of areas from finance and risk to human resources and marketing. Our corporate teams are an essential part of our company, ensuring that we’re setting our businesses, clients, customers and employees up for success.
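For illustration only: the posting's BDD stack is Cucumber/Gherkin with Java and Groovy step definitions; the sketch below shows the same step-driven shape in Python using pytest-bdd instead. The feature file, step wording, and Account class are all invented.

```python
# Hedged sketch of BDD step definitions; pytest-bdd stands in for the
# Cucumber/Java/Groovy stack named in the posting.
from pytest_bdd import scenario, given, when, then, parsers

# Contents of a hypothetical features/payment.feature:
#   Feature: Card payments
#     Scenario: Successful payment
#       Given an account with a balance of 100
#       When I pay 40
#       Then the balance is 60

class Account:
    def __init__(self, balance):
        self.balance = balance
    def pay(self, amount):
        self.balance -= amount

@scenario("features/payment.feature", "Successful payment")
def test_successful_payment():
    pass

@given(parsers.parse("an account with a balance of {amount:d}"), target_fixture="account")
def account(amount):
    return Account(amount)

@when(parsers.parse("I pay {amount:d}"))
def pay(account, amount):
    account.pay(amount)

@then(parsers.parse("the balance is {expected:d}"))
def check_balance(account, expected):
    assert account.balance == expected
```

Each Gherkin step binds to one small function, which is the same structure the Java/Groovy step definitions would have; running pytest would execute the scenario end to end.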
Posted 1 month ago
8.0 years
0 Lacs
Andhra Pradesh
On-site
Through the dedication of our employees, Conduent delivers mission-critical services and solutions for Fortune 100 companies and more than 500 governments, creating exceptional outcomes for our clients and the millions of people who count on them. You have the opportunity to grow personally, to make a difference, and to be part of a culture where individuality matters and is valued every day.
Primary Skills:
Strong knowledge of and experience with data warehousing concepts
Work experience in writing Oracle SQL or PL/SQL; data analysis, profiling and validation
Working knowledge of Oracle ODI (Oracle Data Integrator) and OLAP concepts
Develop and prepare strategies for Business Intelligence processes for the organization
Experience in optimizing solutions using various performance-tuning methods: SQL tuning, ETL tuning (i.e. optimal configuration of transformations and memory parameters), and database tuning using hints, indexes, partitioning, materialized views, external tables, etc. (a minimal tuning sketch follows this posting)
Exposure to the software development lifecycle, version control, build deployment and Agile/Scrum practices
Proven leadership qualities; strong technical, organizational and communication skills
Work closely with team members and management to add delivery capacity for existing and upcoming projects, enable knowledge sharing, and lend depth in key areas of data warehousing and BI solutions.
Prior Work Experience:
8+ years of experience, with a minimum of 3+ years in a Lead/SME role
Hands-on with technology and willing to work in an IC role if the situation demands
Excellent interpersonal and communication skills
At Conduent, everyone receives equal opportunity regardless of: skin color, creed, religion, origin, age, gender identity, gender expression, sex, marital status, sexual orientation, physical or mental disability, medical condition, use of a guide dog or service animal, military/veteran status, or membership in any other legally protected group. People with disabilities who need reasonable accommodations to apply for or compete for a position at Conduent may request such accommodations by clicking the following link, completing the request form, and submitting it via the "Submit" button at the bottom of the form. Click here to open or download the form. Click here to access Conduent's ADAAA accommodation policy.
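For illustration only: a minimal sketch of the index and materialized-view tuning named above, driven from Python with the python-oracledb package. The connection details, table, and columns are invented, and creating a materialized view assumes the schema has the required privileges.

```python
# Hedged sketch of SQL-side tuning (index + materialized view) run from
# Python via python-oracledb. All names and credentials are invented.
import oracledb

conn = oracledb.connect(user="etl", password="change-me", dsn="dwhost/ORCLPDB1")
cur = conn.cursor()

# Index the join key used by the nightly load's lookup queries.
cur.execute("CREATE INDEX ix_sales_cust ON sales (customer_id)")

# Pre-aggregate a hot report query so it avoids rescanning the fact table.
cur.execute("""
    CREATE MATERIALIZED VIEW mv_daily_sales
    BUILD IMMEDIATE REFRESH COMPLETE ON DEMAND AS
    SELECT trunc(sale_date) AS sale_day, SUM(amount) AS revenue
    FROM sales
    GROUP BY trunc(sale_date)
""")
conn.commit()
```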
Posted 1 month ago
4.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Senior Analyst – Data Engineering
The Data and Analytics team is a multi-disciplinary technology team delivering client projects and solutions across Data Management, Visualization, Business Analytics and Automation. The assignments cover a wide range of countries and industry sectors.
The opportunity
We are looking for a Senior Analyst – Data Engineering. The main purpose of the role is to support cloud and on-prem platform analytics and data engineering projects initiated across engagement teams. The role will primarily involve conceptualizing, designing, developing, deploying and maintaining complex technology solutions which help EY solve business problems for its clients. This role works closely with technical architects, product and business subject matter experts (SMEs), back-end developers and other solution architects, and is also onshore-facing. This role will be instrumental in designing, developing, and evolving modern data warehousing solutions and data integration build-outs using cutting-edge tools and platforms for both on-prem and cloud architectures. In this role you will produce design specifications and documentation, develop data migration mappings and transformations for a modern data warehouse setup/data mart creation, and define robust ETL processing to collect and scrub both structured and unstructured data, providing self-serve capabilities (OLAP) in order to create impactful decision analytics reporting.
Discipline: Information Management & Analysis
Role Type: Data Architecture & Engineering
A Data Architect & Engineer at EY:
Uses agreed-upon methods, processes and technologies to design, build and operate scalable on-premises or cloud data architecture and modelling solutions that facilitate data storage, integration, management, validation and security, supporting the entire data asset lifecycle.
Designs, builds and operates data integration solutions that optimize data flows by consolidating disparate data from multiple sources into a single solution.
Works with other Information Management & Analysis professionals, the program team, management and stakeholders to design and build analytics solutions in a way that will deliver business value.
Skills
Cloud Computing, Business Requirements Definition, Analysis and Mapping, Data Modelling, Data Fabric, Data Integration, Data Quality, Database Management, Semantic Layer
Effective Client Communication, Problem Solving / Critical Thinking, Interest and Passion for Technology, Analytical Thinking, Collaboration
Your Key Responsibilities
Evaluating and selecting data warehousing tools for business intelligence, data population, data management, metadata management and warehouse administration for both on-prem and cloud-based engagements
Strong working knowledge across the technology stack, including ETL, ELT, data analysis, metadata, data quality, audit and design
Design, develop, and test in an ETL tool environment (GUI/canvas-driven tools used to create workflows); a minimal ETL sketch follows this posting
Experience in design documentation (data mapping, technical specifications, production support, data dictionaries, test cases, etc.)
Provides technical guidance to a team of data warehouse and business intelligence developers
Coordinate with other technology users to design and implement matters of data governance, data harvesting, cloud implementation strategy, privacy, and security
Adhere to ETL/data warehouse development best practices
Responsible for data orchestration, ingestion, ETL and reporting architecture for both on-prem and cloud (MS Azure/AWS/GCP)
Assisting the team with performance tuning for ETL and database processes
Skills And Attributes For Success
Minimum of 4 years of total experience in the data warehousing/business intelligence field
Solid hands-on 3+ years of professional experience with the creation and implementation of data warehouses on client engagements, and helping create enhancements to a data warehouse
Strong knowledge of data architecture for staging and reporting schemas, data models and cutover strategies using industry-standard tools and technologies
Architecture design and implementation experience with medium to complex on-prem to cloud migrations with any of the major cloud platforms (preferably AWS/Azure/GCP)
Minimum 3+ years' experience in Azure database offerings [Relational, NoSQL, Data Warehouse]
2+ years' hands-on experience in various Azure services preferred – Azure Data Factory, Kafka, Azure Data Explorer, Storage, Azure Data Lake, Azure Synapse Analytics, Azure Analysis Services & Databricks
Minimum of 3 years of hands-on database design, modelling and integration experience with relational data sources, such as SQL Server databases, Oracle/MySQL, Azure SQL and Azure Synapse
Knowledge of and direct experience using business intelligence reporting tools (Power BI, Alteryx, OBIEE, Business Objects, Cognos, Tableau, MicroStrategy, SSAS Cubes, etc.)
Strong creative instincts related to data analysis and visualization. Curiosity to learn the business methodology, data model and user personas.
Strong understanding of BI and DWH best practices, analysis, visualization, and latest trends.
Experience with the software development life cycle (SDLC) and rules of product development, such as installation, upgrade and namespace management
Solid analytical, technical and problem-solving skills
Excellent written and verbal communication skills
To qualify for the role, you must have
Bachelor's or equivalent degree in computer science or a related field, required. Advanced degree or equivalent business experience preferred.
A fact-driven, analytical mindset with excellent attention to detail
Hands-on experience with data engineering tasks such as building analytical data records, and experience manipulating and analyzing large volumes of data
Relevant work experience of minimum 4 to 6 years in a Big 4 or technology/consulting setup
Ideally, you’ll also have
Ability to think strategically/end-to-end with a result-oriented mindset
Ability to build rapport within the firm and win the trust of clients
Willingness to travel extensively and to work on client sites/practice office locations
What We Look For
A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment
An opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals, in the only integrated global transaction business worldwide
Opportunities to work with EY SaT practices globally with leading businesses across a range of industries
What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network.
We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success, as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
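For illustration only: a minimal sketch of the collect-scrub-load-reconcile flow described under Your Key Responsibilities, using pandas with SQLite standing in for the warehouse. The source file, columns, and staging table are invented.

```python
# Hedged sketch of a small ETL step with a reconciliation check at the end.
import sqlite3

import pandas as pd

# Extract from a hypothetical source extract.
src = pd.read_csv("customer_extract.csv")

# Scrub: normalize the key, drop records failing basic validation.
src["customer_id"] = src["customer_id"].astype(str).str.strip()
valid = src.dropna(subset=["customer_id", "country"]).drop_duplicates("customer_id")

# Load into a staging table (SQLite stands in for the warehouse here).
conn = sqlite3.connect("warehouse.db")
valid.to_sql("stg_customer", conn, if_exists="replace", index=False)

# Reconcile: loaded row count must match the validated row count.
loaded = conn.execute("SELECT COUNT(*) FROM stg_customer").fetchone()[0]
assert loaded == len(valid), f"reconciliation failed: {loaded} != {len(valid)}"
```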
Posted 1 month ago
3.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
Oracle Data Integrator (ODI) consultant with 3+ years of relevant experience in the implementation of batch/real-time integrations using ODI 11g.
Ability to customize knowledge modules as per the requirement.
Strong design and development skills.
Ability to design and develop interfaces, packages, load plans, user functions, variables and sequences in ODI.
Understanding of ODI/ODQ administration, maintenance and configuration.
Experience working with multiple source/target systems such as Oracle, MS SQL Server, XML files, flat files, MS Access/Excel documents.
Exposure to OLAP, OLTP, data warehouse and data mart development, and fact and dimensional DB designs.
Experience in developing dimensions and fact tables using ODI (a minimal dimension-load sketch follows this posting).
Experience in high-data-volume environments and performance tuning in ODI.
Good to have exposure to ODI administration and load balancing.
Should be able to configure topology for all the technologies.
Should be able to configure standalone and Java EE agents.
Good to have OAS/OAC.
Exposure to CDC/journalizing implementations and customizing knowledge modules.
Experience in modelling (logical and physical) warehouses and marts.
Strong database design and relational and dimensional data modelling.
Experience in writing complex queries and stored procedures in PL/SQL.
Experience with the UNIX and Windows operating systems.
Experience with ODI 12c would be an added advantage.
Qualifications
Bachelor's Degree
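For illustration only: ODI generates loads like this from its mappings and knowledge modules rather than from hand-written code, so the sketch below only shows the equivalent dimension-upsert SQL driven from Python, with SQLite standing in for the target and invented table names.

```python
# Hedged sketch of a type-1 dimension load: insert new rows, overwrite
# changed attributes. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_product (product_code TEXT PRIMARY KEY, name TEXT);
    CREATE TABLE dim_product (product_code TEXT PRIMARY KEY, name TEXT);
    INSERT INTO stg_product VALUES ('P1', 'Widget'), ('P2', 'Gadget');
""")

# The WHERE TRUE disambiguates SQLite's INSERT ... SELECT upsert grammar.
conn.execute("""
    INSERT INTO dim_product (product_code, name)
    SELECT product_code, name FROM stg_product WHERE TRUE
    ON CONFLICT(product_code) DO UPDATE SET name = excluded.name
""")
conn.commit()
print(conn.execute("SELECT * FROM dim_product").fetchall())
```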
Posted 1 month ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description – External:
We are hiring a Senior Data Engineer with deep expertise in Azure Databricks, Azure Data Lake, and Azure Synapse Analytics to join our high-performing team. The ideal candidate will have a proven track record in designing, building, and optimizing big data pipelines and architectures while leveraging their technical proficiency in cloud-based data engineering. This role requires a strategic thinker who can bridge the gap between raw data and actionable insights, enabling data-driven decision-making for large-scale enterprise initiatives. A strong foundation in distributed computing, ETL frameworks, and advanced data modeling is crucial. The individual will work closely with data architects, analysts, and business teams to deliver scalable and efficient data solutions.
Key Responsibilities:
Data Engineering & Architecture:
Design, develop, and maintain high-performance data pipelines for structured and unstructured data using Azure Databricks and Apache Spark.
Build and manage scalable data ingestion frameworks for batch and real-time data processing.
Implement and optimize data lake architecture in Azure Data Lake to support analytics and reporting workloads.
Develop and optimize data models and queries in Azure Synapse Analytics to power BI and analytics use cases.
Cloud-Based Data Solutions:
Architect and implement modern data lakehouses combining the best of data lakes and data warehouses.
Leverage Azure services like Data Factory, Event Hub, and Blob Storage for end-to-end data workflows.
Ensure security, compliance, and governance of data through Azure Role-Based Access Control (RBAC) and Data Lake ACLs.
ETL/ELT Development:
Develop robust ETL/ELT pipelines using Azure Data Factory, Databricks notebooks, and PySpark.
Perform data transformations, cleansing, and validation to prepare datasets for analysis.
Manage and monitor job orchestration, ensuring pipelines run efficiently and reliably.
Performance Optimization:
Optimize Spark jobs and SQL queries for large-scale data processing.
Implement partitioning, caching, and indexing strategies to improve the performance and scalability of big data workloads.
Conduct capacity planning and recommend infrastructure optimizations for cost-effectiveness.
Collaboration & Stakeholder Management:
Work closely with business analysts, data scientists, and product teams to understand data requirements and deliver solutions.
Participate in cross-functional design sessions to translate business needs into technical specifications.
Provide thought leadership on best practices in data engineering and cloud computing.
Documentation & Knowledge Sharing:
Create detailed documentation for data workflows, pipelines, and architectural decisions.
Mentor junior team members and promote a culture of learning and innovation.
Required Qualifications:
Experience: 12+ years of experience in data engineering, big data, or cloud-based data solutions. Proven expertise with Azure Databricks, Azure Data Lake, and Azure Synapse Analytics.
Technical Skills: Strong hands-on experience with Apache Spark and distributed data processing frameworks. Advanced proficiency in Python and SQL for data manipulation and pipeline development. Deep understanding of data modeling for OLAP, OLTP, and dimensional data models. Experience with ETL/ELT tools like Azure Data Factory or Informatica. Familiarity with Azure DevOps for CI/CD pipelines and version control.
Big Data Ecosystem: Familiarity with Delta Lake for managing big data in Azure. Experience with streaming data frameworks like Kafka, Event Hub, or Spark Streaming (a minimal Structured Streaming sketch follows this posting).
Cloud Expertise: Strong understanding of Azure cloud architecture, including storage, compute, and networking. Knowledge of Azure security best practices, such as encryption and key management.
Preferred Skills (Nice to Have): Experience with machine learning pipelines and frameworks like MLflow or Azure Machine Learning. Knowledge of data visualization tools such as Power BI for creating dashboards and reports. Familiarity with Terraform or ARM templates for infrastructure as code (IaC). Exposure to NoSQL databases like Cosmos DB or MongoDB. Experience with data governance tools like Azure Purview.
Weekly Hours: 40
Time Type: Regular
Location: IND:AP:Hyderabad / Argus Bldg 4f & 5f, Sattva, Knowledge City- Adm: Argus Building, Sattva, Knowledge City
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
Job Category: Big Data
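For illustration only: a minimal Spark Structured Streaming sketch of the real-time side of this role, reading Kafka into Delta. It assumes the Kafka connector is on the Spark classpath, and the broker, topic, and paths are invented.

```python
# Hedged sketch: stream Kafka events into a Delta table with checkpointing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# The Kafka source exposes binary key/value columns; cast before use.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Micro-batch append into Delta; the checkpoint makes restarts safe.
query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/events")
         .outputMode("append")
         .start("/mnt/curated/events"))
query.awaitTermination()
```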
Posted 1 month ago
8.0 - 11.0 years
14 - 24 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Location: Bengaluru, Hyderabad, Pune, Gurgaon, Kolkata
8+ years of overall software development experience, primarily with data warehouse applications.
3+ years of hands-on development experience with relational DBs (MS SQL Server, PostgreSQL, Oracle, etc.), including complex stored procedures and functions using SQL.
5+ years of development experience with ETL tools such as Informatica.
Strong knowledge and experience in OLAP data modeling and design (a minimal star-schema sketch follows this posting).
Demonstrated expertise in performance tuning in various DB environments with large volumes of data.
Expertise in understanding complex business needs, and analyzing, designing, and developing solutions.
Strong communication and professional skills, and the ability to navigate relationships across business units.
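For illustration only: a minimal star-schema sketch of the OLAP modeling called for above; one fact table keyed to conformed dimensions, indexed join keys, and a typical rollup query. SQLite stands in for the warehouse and all names are invented.

```python
# Hedged sketch of a dimensional model and the query shape it serves.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
    CREATE TABLE dim_store (store_key INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (
        date_key  INTEGER REFERENCES dim_date(date_key),
        store_key INTEGER REFERENCES dim_store(store_key),
        amount    REAL
    );
    -- Index the fact table's foreign keys: the usual star-join tuning step.
    CREATE INDEX ix_fact_date  ON fact_sales(date_key);
    CREATE INDEX ix_fact_store ON fact_sales(store_key);
""")

# A typical rollup: revenue by region and month across the star.
rollup = """
    SELECT s.region, d.year, d.month, SUM(f.amount) AS revenue
    FROM fact_sales f
    JOIN dim_date d  ON d.date_key  = f.date_key
    JOIN dim_store s ON s.store_key = f.store_key
    GROUP BY s.region, d.year, d.month
"""
print(conn.execute(rollup).fetchall())
```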
Posted 1 month ago
10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Welcome to Warner Bros. Discovery… the stuff dreams are made of.
Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands, are the storytellers bringing our characters to life, the creators bringing them to your living rooms and the dreamers creating what’s next… From brilliant creatives, to technology trailblazers, across the globe, WBD offers career defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best selves. Here you are supported, here you are celebrated, here you can thrive.
Manager, Analytics Engineering – Hyderabad, India
About Warner Bros. Discovery
Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world's most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines WarnerMedia’s premium entertainment, sports and news assets with Discovery's leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.
Meet Our Team
The Data & Audience Platform organization is at the forefront of developing and maintaining frameworks, tools, and data products vital to WBD, including flagship streaming product Max and non-streaming products such as Films Group, Sports, News and the overall WBD ecosystem. The mission of the Content Lifecycle Analytics team is to enable WBD to leverage the world’s most valuable content library and achieve market-leading content-driven business success. We foster unified analytics and drive data-driven use cases by leveraging a robust multi-tenant platform and semantic layer. We are committed to delivering innovative solutions that empower teams across the company to catalyze customer growth, amplify engagement, and execute timely, informed decisions, ensuring our continued success in an ever-evolving digital landscape.
Role Overview
As a Manager, Analytics Engineering at Warner Bros. Discovery, you will play a pivotal role in driving the data and analytics strategy across various business data domains, including marketing, commerce, finance, customer, and content. Reporting to the VP of Content Lifecycle Analytics, you will lead a team of skilled data and analytics engineers to build and maintain state-of-the-art analytics solutions. Your work will support Warner Bros. Discovery's mission to deliver innovative and data-driven insights that empower global and regional teams across the company. In this role, you will understand data relationships across domains, contribute to the design of reusable semantic models, and effectively communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports. Collaboration is key, as you will work closely with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals. Staying updated with the latest analytics tools, platforms, and technologies will be essential to your success. You will lead by example and teach best practices by demonstrating your own technical competency with these languages, tools, and technology platforms. You will also be responsible for ensuring data privacy, governance, and cloud cost management.
As a leader, you will promote a culture of experimentation and data-driven innovation, inspiring and motivating your team through internal and external presentations and other speaking opportunities. You will also play a key role in hiring, mentoring, and coaching engineers, helping to build an analytics and engineering team that prioritizes empathy, diversity, and inclusion.
Responsibilities
Lead and mentor a team of analytics engineers, ensuring productivity, focus, and motivation in a dynamic environment.
Design, review and develop analytical solutions by integrating data from multiple sources, including databases, APIs, and other sources.
Build and maintain data pipelines to generate insights and support various business functions.
Implement data validation and quality checks to ensure data integrity.
Perform exploratory data analysis (EDA) to understand data distributions and relationships (a minimal EDA sketch follows this posting).
Utilize analytical tools and techniques to uncover correlations, trends, variations, and outliers to gain a comprehensive understanding of the data your team works with.
Employ data mining techniques to identify patterns or leverage data visualization to turn data into easy-to-understand visual formats like charts and graphs.
Communicate data findings and insights to non-technical stakeholders through storytelling, presentations, and reports.
Collaborate with cross-functional teams, including data scientists, data engineers, business analysts, and domain experts, to understand business needs and align data efforts with organizational goals, with a focus on addressing customer pain points.
Stay updated with the latest analytics tools, platforms, and technologies, such as Python, Spark, and Looker.
Give and receive feedback to and from leadership, peers, and direct reports to promote positive development and growth. Deliver facts and decisions with empathy and transparency.
Ensure data privacy, governance, and cost management.
Promote a culture of experimentation and data-driven innovation.
Requirements
Bachelor’s degree in computer science or a similar discipline.
10+ years of experience in data engineering, data science and analytics engineering.
2+ years of experience in engineering management, leading teams of data and analytics engineers.
Expertise in analytical tools and frameworks, such as Looker, Tableau, or Power BI.
Experience in AI-driven data analytics with cloud platforms, preferably AWS and Databricks.
Proficiency in data modelling using OLAP databases, such as Snowflake and Databricks.
Strong programming skills in SQL, Python, and Python-based data manipulation and visualization libraries.
Experience with orchestration frameworks, such as Airflow.
Familiarity with big data frameworks, such as Spark, and ML libraries, such as scikit-learn.
Excellent data analytical and communication skills.
Ability to work in a fast-paced, high-pressure, agile environment.
Strong interpersonal, communication, and presentation skills.
Ability to learn and teach new languages and frameworks.
How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.
Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
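For illustration only: a minimal pandas sketch of the EDA responsibility above, covering distributions, correlations, and box-plot-rule outlier flagging. The toy engagement dataset is invented.

```python
# Hedged sketch of basic EDA on an invented viewing-engagement dataset.
import pandas as pd

df = pd.DataFrame({
    "minutes_watched": [12, 45, 38, 300, 25, 41],
    "titles_started": [1, 4, 3, 20, 2, 4],
})

# Distributions and pairwise relationships.
print(df.describe())
print(df.corr())

# Flag rows outside 1.5x the interquartile range (the box-plot rule).
q1, q3 = df.quantile(0.25), df.quantile(0.75)
iqr = q3 - q1
outliers = ((df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)).any(axis=1)
print(df[outliers])
```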
Posted 1 month ago