8.0 years
0 Lacs
Ahmedabad
On-site
Position Overview
This role is responsible for defining and delivering ZURU’s next-generation data architecture, built for global scalability, real-time analytics, and AI enablement. You will lead the unification of fragmented data systems into a cohesive, cloud-native platform that supports advanced business intelligence and decision-making. Sitting at the intersection of data strategy, engineering, and commercial enablement, this role demands both deep technical acumen and strong cross-functional influence. You will drive the vision and implementation of robust data infrastructure, champion governance standards, and embed a culture of data excellence across the organisation.

Position Impact
In the first six months, the Head of Data Architecture will gain a deep understanding of ZURU’s operating model, technology stack, and data fragmentation challenges. You’ll conduct a comprehensive review of the current architecture, identifying performance gaps, security concerns, and integration challenges across systems like SAP, Odoo, POS, and marketing platforms. By month twelve, you’ll have delivered a fully aligned architecture roadmap, implementing cloud-native infrastructure, data governance standards, and scalable models and pipelines to support AI and analytics. You will have stood up a Centre of Excellence for Data, formalised global data team structures, and established yourself as a trusted partner to senior leadership.

What are you Going to do?
- Lead Global Data Architecture: Own the design, evolution, and delivery of ZURU’s enterprise data architecture across cloud and hybrid environments.
- Consolidate Core Systems: Unify data sources across SAP, Odoo, POS, IoT, and media into a single analytical platform optimised for business value.
- Build Scalable Infrastructure: Architect cloud-native solutions that support both batch and streaming data workflows using tools like Databricks, Kafka, and Snowflake.
- Implement Governance Frameworks: Define and enforce enterprise-wide data standards for access control, privacy, quality, security, and lineage.
- Enable Metadata & Cataloguing: Deploy metadata management and cataloguing tools to enhance data discoverability and self-service analytics.
- Operationalise AI/ML Pipelines: Lead data architecture that supports AI/ML initiatives, including demand forecasting, pricing models, and personalisation.
- Partner Across Functions: Translate business needs into data architecture solutions by collaborating with leaders in Marketing, Finance, Supply Chain, R&D, and Technology.
- Optimise Cloud Cost & Performance: Roll out compute and storage systems that balance cost efficiency, performance, and observability across platforms.
- Establish Data Leadership: Build and mentor a high-performing data team across India and NZ, and drive alignment across engineering, analytics, and governance.
- Vendor and Tool Strategy: Evaluate external tools and partners to ensure the data ecosystem is future-ready, scalable, and cost-effective.

What are we Looking for?
- 8+ years of experience in data architecture, with 3+ years in a senior or leadership role across cloud or hybrid environments
- Proven ability to design and scale large data platforms supporting analytics, real-time reporting, and AI/ML use cases
- Hands-on expertise with ingestion, transformation, and orchestration pipelines (e.g. Kafka, Airflow, DBT, Fivetran)
- Strong knowledge of ERP data models, especially SAP and Odoo
- Experience with data governance, compliance (GDPR/CCPA), metadata cataloguing, and security practices
- Familiarity with distributed systems and streaming frameworks like Spark or Flink
- Strong stakeholder management and communication skills, with the ability to influence both technical and business teams
- Experience building and leading cross-regional data teams

Tools & Technologies
- Cloud Platforms: AWS (S3, EMR, Kinesis, Glue), Azure (Synapse, ADLS), GCP
- Big Data: Hadoop, Apache Spark, Apache Flink
- Streaming: Kafka, Kinesis, Pub/Sub
- Orchestration: Airflow, Prefect, Dagster, DBT
- Warehousing: Snowflake, Redshift, BigQuery, Databricks Delta
- NoSQL: Cassandra, DynamoDB, HBase, Redis
- Query Engines: Presto/Trino, Athena
- IaC & CI/CD: Terraform, GitHub Actions
- Monitoring: Prometheus, Grafana, ELK, OpenTelemetry
- Security/Governance: IAM, TLS, KMS, Amundsen, DataHub, Collibra, DBT for lineage

What do we Offer?
- Competitive compensation
- 5 working days with flexible working hours
- Medical insurance for self & family
- Training & skill development programs
- Work with the global team and make the most of its diverse knowledge
- Several discussions over multiple pizza parties
- A lot more! Come and discover us!
Posted 2 weeks ago
0 years
0 Lacs
Andhra Pradesh
On-site
Job Summary:
We are looking for a skilled AWS Data Engineer with strong experience in building and managing cloud-based ETL pipelines using AWS Glue, Python/PySpark, and Athena, along with data warehousing expertise in Amazon Redshift. The ideal candidate will be responsible for designing, developing, and maintaining scalable data solutions in a cloud-native environment.

Responsibilities:
- Design and implement ETL workflows using AWS Glue, Python, and PySpark (see the sketch after this posting).
- Develop and optimize queries using Amazon Athena and Redshift.
- Build scalable data pipelines to ingest, transform, and load data from various sources.
- Ensure data quality, integrity, and security across AWS services.
- Collaborate with data analysts, data scientists, and business stakeholders to deliver data solutions.
- Monitor and troubleshoot ETL jobs and cloud infrastructure performance.
- Automate data workflows and integrate with CI/CD pipelines.

Required Skills & Qualifications:
- Hands-on experience with AWS Glue, Athena, and Redshift.
- Strong programming skills in Python and PySpark.
- Experience with ETL design, implementation, and optimization.
- Familiarity with S3, Lambda, CloudWatch, and other AWS services.
- Understanding of data warehousing concepts and performance tuning in Redshift.
- Experience with schema design, partitioning, and query optimization in Athena.
- Proficiency in version control (Git) and agile development practices.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth, one that seeks to provide you with exciting projects, opportunities, and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
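A minimal sketch of the kind of Glue PySpark job this role describes: read raw CSV from S3, clean it, and write partitioned Parquet for Athena or Redshift Spectrum to query. The bucket names, columns, and paths are illustrative, not taken from the posting.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue boilerplate: resolve job arguments and build contexts
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV files from S3 into a Spark DataFrame
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-raw-bucket/orders/"]},
    format="csv",
    format_options={"withHeader": True},
).toDF()

# Basic cleaning: de-duplicate, type-cast, and drop invalid rows
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount").cast("double") > 0)
)

# Write curated, partitioned Parquet for Athena / Redshift Spectrum
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```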
Posted 2 weeks ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Manager, Scientific Data Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Join a team that is passionate about using data, analytics, and insights to drive decision-making and create custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
- Design, develop, and maintain data pipelines to extract data from various sources and populate a data lake and data warehouse.
- Work closely with data scientists, analysts, and business teams to understand data requirements and deliver solutions aligned with business goals.
- Build and maintain platforms that support data ingestion, transformation, and orchestration across various data sources, both internal and external.
- Use data orchestration, logging, and monitoring tools to build resilient pipelines.
- Automate data flows and pipeline monitoring to ensure scalability, performance, and resilience of the platform.
- Monitor, troubleshoot, and resolve issues related to the data integration platform, ensuring uptime and reliability.
- Maintain thorough documentation for integration processes, configurations, and code to ensure easy onboarding for new team members and future scalability.
- Develop pipelines to ingest data into cloud data warehouses.
- Establish, modify, and maintain data structures and associated components.
- Create and deliver standard reports in accordance with stakeholder needs and conforming to agreed standards.
- Work within a matrix organizational structure, reporting to both the functional manager and the project manager.
- Participate in project planning, execution, and delivery, ensuring alignment with both functional and project goals.

What Should You Have
- Bachelor's degree in Information Technology, Computer Science, or any Technology stream.
- 3+ years of experience developing data pipelines and data infrastructure, ideally within a drug development or life sciences context.
- Demonstrated expertise in delivering large-scale information management technology solutions encompassing data integration and self-service analytics enablement.
- Experience in software/data engineering practices (including versioning, release management, deployment of datasets, agile and related software tools).
- Ability to design, build, and unit test applications on the Spark framework in Python, and to build PySpark-based applications for both batch and streaming requirements, which requires in-depth knowledge of Databricks/Hadoop.
- Experience working with storage frameworks like Delta Lake/Iceberg (see the Delta Lake sketch after this posting).
- Experience working with MPP data warehouses like Redshift.
- Cloud-native experience, ideally AWS certified.
- Strong working knowledge of at least one reporting/insight-generation technology.
- Good interpersonal and communication skills (verbal and written).
- Proven record of delivering high-quality results.
- Product- and customer-centric approach.
- Innovative thinking, experimental mindset.

Skills
Mandatory Skills:
- Foundational Data Concepts: SQL (Intermediate/Advanced), Python (Intermediate)
- Cloud Fundamentals (AWS Focus): AWS Console, IAM roles, regions, concepts of cloud computing, AWS S3
- Data Processing & Transformation: Apache Spark (concepts & usage), Databricks (platform usage), Unity Catalog, Delta Lake
- ETL & Orchestration: AWS Glue (ETL, Catalog), Lambda, Apache Airflow (DAGs and orchestration) or another orchestration tool, dbt (Data Build Tool), Matillion (or similar ETL tool)
- Data Storage & Querying: Amazon Redshift / Azure Synapse, Trino or equivalent, AWS Athena / query federation
- Data Quality & Governance: data quality concepts and implementation, data observability concepts, Collibra or an equivalent tool
- Real-time / Streaming: Apache Kafka (concepts & usage)
- DevOps & Automation: CI/CD concepts and pipelines (GitHub Actions / Jenkins / Azure DevOps)

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us and start making your impact today.

#HYDIT2025
Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific.
Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Business, Business Data Analytics, Business Intelligence (BI), Collaborative Development, Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Engineering Design, Engineering Management, Information Management, Management Process, Product Lifecycle, Project Engineering, Project Management Engineering, Scientific Data Management, Social Collaboration, Software Development, Software Development Life Cycle (SDLC), System Designs
Preferred Skills:
Job Posting End Date: 08/20/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R350700
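As a companion to the Delta Lake skill listed above, here is a hedged sketch of a PySpark upsert (MERGE) into a Delta table, the batch pattern such roles typically exercise on Databricks. The table paths and join key are invented; it assumes the delta-spark package is available.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

# On Databricks these extensions are preconfigured; set them explicitly
# when running on a plain Spark cluster with delta-spark installed.
spark = (SparkSession.builder.appName("delta-upsert")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Daily increment staged as Parquet; paths and key are illustrative
updates = spark.read.parquet("s3a://example-stage/samples_daily/")
target = DeltaTable.forPath(spark, "s3a://example-lake/samples/")

# Upsert: update rows with matching sample_id, insert new ones
(target.alias("t")
 .merge(updates.alias("u"), "t.sample_id = u.sample_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```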
Posted 2 weeks ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description
Calling all originals: At Levi Strauss & Co., you can be yourself, and be part of something bigger. We’re a company of people who like to forge our own path and leave the world better than we found it. Who believe that what makes us different makes us stronger. So add your voice. Make an impact. Find your fit, and your future.

Levi’s is looking for a seasoned Analyst and Data Visualization Expert to support the data science and analytics team focused on supply chain operations data products. The ideal candidate is a great storyteller and strong technical contributor who has experience solving business problems with data-driven tools. You will be responsible for delivering a suite of analytical products, including analyses, dashboards, insights, and recommendations. You will collect, analyze, and present data to improve strategic decision-making and to track the benefits unlocked by the data products. You should have a high degree of curiosity about the business and the skills to discover impactful insights from data. You should be able to communicate those insights in a way that builds confidence and enables decisions that drive business value.

Responsibilities
- Dive deep into complex business problems and provide insights on digital features or AI model performance. Partner with cross-functional teams on implementation.
- Create dashboards to track adoption and business impact of features and data products launched.
- Bring data to life through storytelling, in a clear and meaningful way, to audiences with mixed levels of technical expertise, informing key strategic decisions.
- Partner with data science, data analytics, and product managers on planning, goal setting, and prioritization.
- Partner with data and engineering teams to improve instrumentation and product/model health reporting, pre/post analysis, and business debugging.
- Partner with the A/B test team to design experiments and run post-hoc analysis of results.
- Promote a culture of data-driven technical excellence, ownership, and collaboration.
- Support business stakeholders and regional data analysts by understanding their needs and providing guidance and support.
- Promote adoption of new digital tools to standardize analytics technical suites across the global business.

Your Background
- 8+ years of professional experience analyzing complex data, drawing conclusions, and making recommendations.
- 6+ years of applied data visualization experience. GCP Looker experience preferred (Tableau or Power BI experience is also acceptable).
- 4+ years of experience extracting and manipulating large data sets from various relational databases using SQL (Amazon Redshift, Oracle, Google BigQuery).
- Coding skills in at least one statistical or programming language (R or Python preferred) to import, summarize, and analyze data.
- Hands-on experience working with big data, ideally clickstream data, in high-volume sectors such as ecommerce, gaming, or social networks; ecommerce platform experience is a plus.
- Ability to translate and present complex analysis in executive summaries.
- Clear and effective written and verbal communication and strong interpersonal skills.
- Strong problem-solving skills.
- Bachelor's in Economics, Statistics, Data Science, or Engineering (Master's is a plus), or equivalent experience.

Benefits
We put a lot of thought into our programmes to provide you with a benefits package that matters. Whether it is for medical care, taking time off, improving your health or planning for retirement, we've got you covered.
Here's a Small Snapshot
- Complimentary preventive health check-up for you & your spouse
- OPD coverage
- Best-in-class leave plan, including paternity and family care leaves
- Counselling sessions to prioritise mental well-being
- Exclusive discount vouchers on Levi’s products

We are an Equal Opportunity Employer committed to empowering individuals from all walks of life to achieve their professional goals with us, regardless of race, religion, gender, gender identity, pregnancy, disability, sexual orientation, age, national origin, citizenship status, or genetic information. We actively seek and encourage applications from diverse candidates, including those with disabilities, and offer accommodations throughout the selection process upon request. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

LOCATION: India, Bangalore - Office
FULL TIME/PART TIME: Full time
Current LS&Co Employees, apply via your Workday account.
Posted 2 weeks ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary:
We are seeking a skilled Data Engineer to join our dynamic team. In this role, you will be responsible for implementing and maintaining scalable data pipelines and infrastructure on the AWS cloud platform. The ideal candidate will have experience with AWS services, particularly in the realm of big data processing and analytics. The role involves working closely with cross-functional teams to support data-driven decision-making, with a focus on delivering business objectives while improving efficiency and ensuring high service quality.

Key Responsibilities:
- Design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources.
- Apply knowledge of real-time data replication and batch processing using distributed computing platforms like Spark and Kafka (see the streaming sketch after this posting).
- Optimize performance of data processing jobs and ensure system scalability and reliability.
- Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS.
- Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting.
- Lead and mentor junior data engineers, providing guidance on best practices, code reviews, and technical solutions.
- Evaluate and implement new frameworks and tools for data engineering.
- Maintain a healthy working relationship with business partners/users and other MLI departments.
- Take responsibility for the overall performance, cost, and delivery of technology solutions.

Key Technical Competencies/Skills Required:
- Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc.
- Expertise in data modelling and knowledge of modern file and table formats.
- Proficiency in programming languages such as Python, PySpark, and SQL/PLSQL for implementing data pipelines and ETL processes.
- Experience architecting or deploying cloud/virtualization data solutions (like data lakes, EDWs, and marts) in an enterprise setting.
- Knowledge of the modern data stack and keeping the technology stack refreshed.
- Knowledge of DevOps practices for CI/CD on data pipelines.
- Knowledge of data observability, automated data lineage, and metadata management would be an added advantage.
- Cloud/hybrid cloud (preferably AWS) solutioning for data strategy across data lakes, BI, and analytics.
- Set-up of logging, monitoring, alerting, and dashboards for cloud and data solutions.
- Experience with data warehousing concepts.

Desired Qualifications and Experience:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's preferred).
- Proven experience of 7+ years as a Data Engineer or in a similar role with a strong focus on AWS cloud.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and collaboration skills.
- AWS certifications (e.g., AWS Certified Big Data - Specialty) are a plus.
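The real-time replication requirement above usually translates into a Spark Structured Streaming job reading from Kafka. A minimal sketch under assumed broker, topic, schema, and path names follows; none of these values come from the posting, and it assumes the spark-sql-kafka connector package is on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("policy-events-stream").getOrCreate()

# Hypothetical event schema for the Kafka message payload
schema = (StructType()
          .add("policy_id", StringType())
          .add("event_type", StringType())
          .add("premium", DoubleType()))

# Consume the topic and parse the JSON value column
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "policy-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land micro-batches to S3 as Parquet; checkpointing makes it restartable
query = (events.writeStream
         .format("parquet")
         .option("path", "s3a://example-lake/policy-events/")
         .option("checkpointLocation", "s3a://example-lake/_chk/policy-events/")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```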
Posted 2 weeks ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We’re the world’s leading data, insights, and consulting company; we shape the brands of tomorrow by better understanding people everywhere. Kantar’s Profiles division is home to the world’s largest audience network. With access to 170m+ people in over 100 global markets, we offer unrivalled global reach with local relevancy. Validated by industry-leading anti-fraud technology, Kantar’s Profiles Audience Network delivers the most meaningful data with consistency, accuracy, and accountability – all at speed and scale.

Job Details
Join Our Data Science Team as a Mid-Level Data Analyst!
We are seeking candidates with experience in a data-related role, possessing a solid foundation in programming (Python and SQL) and a passion for turning data into actionable insights. You will join our data science department and work directly with our Senior Data Scientists.

Why This Job is Important
This role is crucial for ensuring the functionality and performance of our data ecosystem. By analyzing user acquisition and retention data, you will help identify and resolve weaknesses and bugs in existing models, ultimately contributing to the improvement of our technologies.

What You’ll Be Doing
- Monitoring and Maintenance: Own alerting to ensure the ecosystem's functionality, working with existing models for day-to-day operations and performance.
- Data Analysis: Analyze user acquisition and retention data, identifying weaknesses and bugs in existing models for resolution.
- Analytics and Machine Learning: Run analytics to extract statistics and patterns, and design machine learning models to improve existing technologies.
- Collaboration: Work closely with the development team, contributing to data/statistics tasks for improving user engagement. Collaborate with various stakeholders to formulate pertinent questions and provide actionable insights.
- Data Transformation: Employ ETL tools like DBT to transform data, making it more accessible to the broader business. Use time-series graphing services such as Grafana to create visualizations, monitor trends, and identify patterns.

The Ideal Skills & Experience
- Experience with data-related technologies, including database queries, programming, data mining/wrangling, analysis, and reporting.
- Strong proficiency in SQL, with the ability to read, write, and query effectively.
- A keen curiosity about data, statistics, machine learning, and data science.
- Strong problem-solving skills with an emphasis on product development, logical thinking, and critical analysis.
- Experience with statistical computing languages such as Python, Scala, R, or MATLAB.
- Knowledge of statistical techniques and concepts, including regression, properties of distributions, statistical tests, and their proper usage.
- Experience using web services and platforms, including AWS, EC2, S3, Redshift, DigitalOcean, etc.
- Meticulous, with a good work ethic and the ability to collaborate across different teams.
- Experience in Excel and Power BI is a plus.

Why Join Kantar?
You’ll be joining our technology team, right in the middle of our tech revolution. We’re undergoing the largest technology transformation Kantar has ever seen, investing in new AI and cloud technologies. By modernizing all our tech systems, we can respond to our clients' needs faster and more efficiently – and keep Kantar as a market leader for insights.

About Kantar
We shape the brands of tomorrow by better understanding people everywhere.
By understanding people, we can understand what drives their decisions, actions, and aspirations on a global scale. And by amplifying our in-depth expertise of human understanding alongside ground-breaking technology, we can help brands find concrete insights that will help them succeed in our fast-paced, ever-shifting world.

And because we know people, we like to make sure our own people are being looked after as well. Equality of opportunity for everyone is our highest priority, and we support our colleagues to work in a way that supports their health and wellbeing. While we encourage teams to spend part of their working week in the office, we understand no one size fits all; our approach is flexible to ensure everybody feels included, accepted, and that we can win together. We’re dedicated to creating an inclusive culture and value the diversity of our people, clients, suppliers, and communities, and we encourage applications from all backgrounds and sections of society. Even if you feel like you’re not an exact match, we’d love to receive your application and talk to you about this job or others at Kantar.

Country: India
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
You will work closely with business IT partners to gain a deep understanding of business and data requirements. Your responsibilities will include acquiring data from primary or secondary sources, conducting data profiling, and interpreting data to provide quick analysis. You will be expected to identify trends and patterns in complex datasets and to develop business metrics, proofs of concept, source mapping documents, and raw source data models. Additionally, you will play a key role in identifying and defining opportunities for process improvement.

To excel in this role, you must possess strong proficiency in Excel, databases (such as Redshift and Oracle), and programming languages (such as Python and R). Your ability to write SQL queries for data profiling and analysis, and to present insights to stakeholders, will be crucial. Familiarity with data warehousing and data modelling concepts, as well as exposure to the IT project lifecycle, will be highly beneficial. Previous experience in the Finance or Life Science domains is preferred. Knowledge of BI tools would be an added advantage for this position.
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Maharashtra
On-site
As a Solutions Architect with over 7 years of experience, you will have the opportunity to leverage your expertise in cloud data solutions to architect scalable and modern solutions on AWS. In this role at Quantiphi, you will be a key member of our high-impact engineering teams, working closely with clients to solve complex data challenges and design cutting-edge data analytics solutions.

Your responsibilities will include acting as a trusted advisor to clients, leading discovery/design workshops with global customers, and collaborating with AWS subject matter experts to develop compelling proposals and Statements of Work (SOWs). You will also represent Quantiphi in various forums such as tech talks, webinars, and client presentations, providing strategic insights and solutioning support during pre-sales activities.

To excel in this role, you should have a strong background in AWS Data Services, including DMS, SCT, Redshift, Glue, Lambda, EMR, and Kinesis. Your experience in data migration and modernization, particularly from Oracle, Teradata, and Netezza to AWS, will be crucial. Hands-on experience with ETL tools such as SSIS, Informatica, and Talend, as well as a solid understanding of OLTP/OLAP, Star & Snowflake schemas, and data modeling methodologies, are essential for success in this position. Additionally, familiarity with backend development using Python, APIs, and stream processing technologies like Kafka, along with knowledge of distributed computing concepts including Hadoop and MapReduce, will be beneficial. A DevOps mindset with experience in CI/CD practices and Infrastructure as Code is also desired.

Joining Quantiphi as a Solutions Architect is more than just a job; it's an opportunity to shape digital transformation journeys and influence business strategies across various industries. If you are a cloud data enthusiast looking to make a significant impact in the field of data analytics, this role is perfect for you.
Posted 2 weeks ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
DailyObjects is a homegrown brand that creates aspirational everyday products designed to enhance modern lifestyles. Proudly designed and made in India, DailyObjects brings quality design and Indian craftsmanship to the world. With over 30,000 styles in a dozen accessories categories, our products are loved by over 2 million customers globally. At DailyObjects, we are committed to designing exceptional products that blend distinctive aesthetics with practical functionality. We are a fast-growing D2C brand with a dynamic culture of innovation, adaptability, and excellence.

We are looking for a talented 3D Designer who can bring products to life through detailed, photorealistic 3D renders, animations, and mockups. You will play a key role in visualizing products before they are physically manufactured and in creating compelling content for marketing, e-commerce, and social media.

Responsibilities
- Develop high-quality 3D models, renders, and animations of lifestyle and tech accessory products
- Collaborate with the product design, marketing, and UI/UX teams to visualize concepts, create mockups, and enhance customer experience
- Prepare 3D assets for product listing pages, AR previews, and promotional materials
- Ensure models are optimized for performance without compromising on quality
- Maintain file organization and asset libraries
- Stay updated with the latest 3D design tools, trends, and best practices

Requirements
- Bachelor's degree or diploma in Design, Animation, 3D Modelling, or a related field
- Proficiency in 3D software like Blender, Cinema 4D, Maya, or 3ds Max
- Knowledge of rendering engines (Keyshot, V-Ray, Redshift, etc.)
- Understanding of texture mapping, lighting, and material creation
- Basic knowledge of Adobe Creative Suite (Photoshop, Illustrator)
- Strong visual sense and attention to detail
- Ability to manage time and multiple projects simultaneously
- Experience in e-commerce or lifestyle product rendering is a plus

(ref:hirist.tech)
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
You will be responsible for working with AWS CDK (using TypeScript) and CloudFormation templates to manage various AWS services such as Redshift, Glue, IAM roles, KMS keys, Secrets Manager, Airflow, SFTP, AWS Lambda, S3, and EventBridge. Your tasks will include executing grants, stored procedures, and queries, and using Redshift Spectrum to query S3; defining execution roles; debugging jobs; creating IAM roles with fine-grained access; integrating and deploying services; managing KMS keys; configuring Secrets Manager; creating Airflow DAGs; executing serverless AWS Lambda functions; debugging Lambda functions; managing S3 object storage, including lifecycle configuration, resource-based policies, and encryption; and setting up event triggers using EventBridge rules with Lambda targets (a minimal CDK sketch follows this posting).

You should have knowledge of a SQL workbench for executing grants on AWS Redshift and a strong understanding of networking concepts, security, and cloud architecture. Experience with monitoring tools like CloudWatch and familiarity with containerization tools like Docker and Kubernetes would be beneficial. Strong problem-solving skills and the ability to thrive in a fast-paced environment are essential.

Virtusa is a company that values teamwork, quality of life, and professional and personal development. With a global team of 27,000 professionals, Virtusa is committed to supporting your growth by providing exciting projects, opportunities to work with cutting-edge technologies, and a collaborative team environment that encourages the exchange of ideas and excellence. At Virtusa, you will have the chance to work with great minds and unleash your full potential in a dynamic and innovative workplace.
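The role itself works in CDK with TypeScript; to keep the sketches in this digest in one language, here is the equivalent shape in CDK's Python bindings: an S3 bucket with a lifecycle rule, a Lambda function, and a scheduled EventBridge rule targeting it. All construct ids and the asset path are hypothetical.

```python
from aws_cdk import App, Stack, Duration
from aws_cdk import aws_events as events
from aws_cdk import aws_events_targets as targets
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataPlatformStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket with a lifecycle rule expiring objects after 90 days
        bucket = s3.Bucket(
            self, "RawDataBucket",
            lifecycle_rules=[s3.LifecycleRule(expiration=Duration.days(90))],
        )

        # Serverless ingest function; grant it read access to the bucket
        fn = _lambda.Function(
            self, "IngestFn",
            runtime=_lambda.Runtime.PYTHON_3_11,
            handler="index.handler",
            code=_lambda.Code.from_asset("lambda"),
        )
        bucket.grant_read(fn)

        # EventBridge rule: trigger the function nightly at 02:00 UTC
        rule = events.Rule(
            self, "NightlyRule",
            schedule=events.Schedule.cron(hour="2", minute="0"),
        )
        rule.add_target(targets.LambdaFunction(fn))

app = App()
DataPlatformStack(app, "data-platform")
app.synth()
```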
Posted 2 weeks ago
5.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
About Us
We are a fast-growing Direct-to-Consumer (D2C) company revolutionizing how customers interact with our products. Our data-driven approach is at the core of our business strategy, enabling us to make informed decisions that enhance customer experience and drive business growth. We're looking for a talented Senior Data Engineer to join our team and help shape our data infrastructure for the future.

Role Overview
As a Senior Data Engineer, you will architect, build, and maintain our data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Key Responsibilities
- Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms
- Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources
- Build and support data pipelines for reporting, analytics, and machine learning applications
- Implement and manage streaming data solutions using Kafka and other technologies
- Design and optimize database schemas and data models in ClickHouse and other databases
- Develop and maintain data workflows using Apache Airflow and similar orchestration tools (see the DAG sketch after this posting)
- Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks
- Collaborate with data scientists to implement ML infrastructure for model training and deployment
- Ensure data quality, reliability, and security across all data platforms
- Monitor data pipelines and implement proactive alerting systems
- Troubleshoot and resolve data infrastructure issues
- Document data flows, architectures, and processes
- Mentor junior data engineers and contribute to establishing best practices
- Stay current with industry trends and emerging technologies in data engineering

Qualifications
Required:
- Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's preferred)
- 5+ years of experience in data engineering roles
- Strong expertise in AWS and/or GCP cloud platforms and services
- Proficiency in building data pipelines using modern ETL/ELT tools and frameworks
- Experience with stream processing technologies such as Kafka
- Hands-on experience with ClickHouse or similar analytical databases
- Strong programming skills in Python and experience with PySpark
- Experience with workflow orchestration tools like Apache Airflow
- Solid understanding of data modeling, data warehousing concepts, and dimensional modeling
- Knowledge of SQL and NoSQL databases
- Strong problem-solving skills and attention to detail
- Excellent communication skills and ability to work in cross-functional teams

Preferred:
- Experience in D2C, e-commerce, or retail industries
- Knowledge of data visualization tools (Tableau, Looker, Power BI)
- Experience with real-time analytics solutions
- Familiarity with CI/CD practices for data pipelines
- Experience with containerization technologies (Docker, Kubernetes)
- Understanding of data governance and compliance requirements
- Experience with MLOps or ML engineering

Technologies
- Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc)
- Data Processing: Apache Spark, PySpark, Python, SQL
- Streaming: Apache Kafka, Kinesis
- Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB
- Orchestration: Apache Airflow
- Version Control: Git
- Containerization: Docker, Kubernetes (optional)

What We Offer
- Competitive salary and comprehensive benefits package
- Opportunity to work with cutting-edge data technologies
- Professional development and learning opportunities
- Modern office in Mumbai with great amenities
- Collaborative and innovation-driven culture
- Opportunity to make a significant impact on company growth

(ref:hirist.tech)
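As referenced in the responsibilities above, here is an illustrative Airflow DAG skeleton for a daily ELT run. The callables are stubs, and every id, schedule, and path is a placeholder rather than anything specified by the employer.

```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(ds, **_):
    # e.g. list s3://example-lake/events/dt={ds}/ and register input paths
    print(f"extracting partitions for {ds}")

def transform(ds, **_):
    # e.g. spark-submit a PySpark job that aggregates the day's events
    print(f"transforming {ds}")

def load(ds, **_):
    # e.g. insert the aggregates into a ClickHouse reporting table
    print(f"loading {ds}")

with DAG(
    dag_id="daily_events_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=10)},
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```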
Posted 2 weeks ago
5.0 - 9.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As an AWS Data Engineer at Sufalam Technologies, located in Ahmedabad, India, you will be responsible for designing and implementing data engineering solutions on AWS. Your role will involve developing data models, managing ETL processes, and ensuring the efficient operation of data warehousing solutions. Collaboration with Finance, Data Science, and Product teams is crucial to understand reconciliation needs and ensure timely data delivery. Your expertise will contribute to data analytics activities supporting business decision-making and strategic goals.

Key responsibilities include designing and implementing scalable and secure ETL/ELT pipelines for processing financial data, collaborating closely with various teams to understand reconciliation needs and ensure timely data delivery, implementing monitoring and alerting for pipeline health and data quality, maintaining detailed documentation on data flows, models, and reconciliation logic, and ensuring compliance with financial data handling and audit standards.

To excel in this role, you should have 5-6 years of experience in data engineering with a strong focus on AWS data services. Hands-on experience with AWS Glue, Lambda, S3, Redshift, Athena, Step Functions, Lake Formation, and IAM is essential for secure data governance. A solid understanding of data reconciliation processes in the finance domain, strong SQL skills, experience with data warehousing and data lakes, and proficiency in Python or PySpark for data transformation are required. Knowledge of financial accounting principles or experience working with financial datasets (AR, AP, General Ledger, etc.) would be beneficial.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
DXFactor is a US-based tech company working with customers globally. We are a certified Great Place to Work and are currently seeking candidates for the role of Data Engineer with 4 to 6 years of experience. Our presence spans the US and India, specifically Ahmedabad. As a Data Engineer at DXFactor, you will be expected to specialize in Snowflake, AWS, and Python.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for both batch and streaming workflows.
- Implement robust ETL/ELT processes to extract data from diverse sources and load them into data warehouses.
- Build and optimize database schemas following best practices in normalization and indexing.
- Create and update documentation for data flows, pipelines, and processes.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Monitor and troubleshoot data pipelines to ensure optimal performance.
- Implement data quality checks and validation processes (see the sketch after this posting).
- Develop and manage CI/CD workflows for data engineering projects.
- Stay updated with emerging technologies and suggest enhancements to existing systems.

Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 4+ years of experience in data engineering roles.
- Proficiency in Python programming and SQL query writing.
- Hands-on experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Cassandra).
- Familiarity with data warehousing technologies such as Snowflake, Redshift, and BigQuery.
- Demonstrated ability in constructing efficient and scalable data pipelines.
- Practical knowledge of batch and streaming data processing methods.
- Experience in implementing data validation, quality checks, and error handling mechanisms.
- Work experience with cloud platforms, particularly AWS (S3, EMR, Glue, Lambda, Redshift) and/or Azure (Data Factory, Databricks, HDInsight).
- Understanding of various data architectures, including data lakes, data warehouses, and data mesh.
- Proven ability to debug complex data flows and optimize underperforming pipelines.
- Strong documentation skills and effective communication of technical concepts.
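For the data quality item flagged in the responsibilities above, a minimal sketch of pre-load validation on a staged pandas DataFrame; the column names, file, and thresholds are invented for illustration.

```python
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable failures; empty means the batch is clean."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        failures.append(f"customer_id null rate {null_rate:.2%} exceeds 1%")
    if (df["amount"] < 0).any():
        failures.append("negative amounts present")
    return failures

df = pd.read_parquet("staged_orders.parquet")  # hypothetical staging file
problems = run_quality_checks(df)
if problems:
    # Fail the pipeline run loudly rather than loading bad data
    raise ValueError("; ".join(problems))
```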
Posted 2 weeks ago
8.0 - 12.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
You will be responsible for building the most personalized and intelligent news experiences for India's next 750 million digital users. As our Principal Data Engineer, your main tasks will include designing and maintaining the data infrastructure that powers personalization systems and analytics platforms. This involves ensuring seamless data flow from source to consumption, architecting scalable data pipelines to process massive volumes of user interaction and content data, and developing robust ETL processes for large-scale transformations and analytical processing. You will also be involved in creating and maintaining data lakes/warehouses that consolidate data from multiple sources, optimized for ML model consumption and business intelligence. Additionally, you will implement data governance practices and collaborate with the ML team to ensure the right data availability for recommendation systems.

To excel in this role, you should have a Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field, along with 8-12 years of data engineering experience, including at least 3 years in a senior role. You must possess expert-level SQL skills and have strong experience with the Apache Spark ecosystem (Spark SQL, Streaming, SparkML), as well as proficiency in Python/Scala. Experience with the AWS data ecosystem (Redshift, S3, Glue, EMR, Kinesis, Lambda, Athena) and ETL frameworks (Glue, Airflow) is essential. A proven track record of building large-scale data pipelines in production environments, particularly in high-traffic digital media, will be advantageous. Excellent communication skills are also required, as you will need to collaborate effectively across teams in a fast-paced environment that demands engineering agility.
Posted 2 weeks ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS Services – Glue, Lambda, Step Functions, S3, Athena

Job Description
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities
- Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
- Collaborate with analysts to understand data requirements and ensure data availability and quality.
- Write and optimize SQL queries for data extraction, transformation, and loading.
- Utilize Git for version control, ensuring proper documentation and tracking of code changes.
- Design, implement, and manage scalable data lakes on AWS, including S3 or other relevant services, for efficient data storage and retrieval.
- Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
- Create interactive dashboards and data visualizations in Amazon QuickSight.
- Automate workflows using AWS services like EventBridge and Step Functions.
- Monitor and optimize data processing workflows for performance and scalability (see the Athena sketch after this posting).
- Troubleshoot data-related issues and provide timely resolution.
- Stay up-to-date with industry best practices and emerging technologies in data engineering.

Qualifications
- Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
- Strong proficiency in PySpark and Python for data processing and analysis.
- Proficiency in SQL for data manipulation and querying.
- Experience with version control systems, preferably Git.
- Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit, etc.
- Familiarity with Databricks and its concepts.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills
- Knowledge of data warehousing concepts and data modeling.
- Familiarity with big data technologies like Hadoop and Spark.
- AWS certifications related to data engineering.
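As a small illustration of the Athena piece referenced above, this sketch starts a query from Python with boto3 and polls until it finishes. The database, query, region, and result bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Kick off the query; results land in the configured S3 location
qid = athena.start_query_execution(
    QueryString="SELECT event_date, COUNT(*) AS events FROM clicks GROUP BY 1",
    QueryExecutionContext={"Database": "analytics_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    status = athena.get_query_execution(QueryExecutionId=qid)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

print(state)
if state == "SUCCEEDED":
    # First page of results; larger result sets need pagination
    rows = athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```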
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
As a skilled Data Engineer with 7-10 years of experience, you will be a valuable addition to our dynamic team in India. Your primary focus will involve designing and optimizing data pipelines to efficiently handle large datasets and extract valuable business insights.

Your responsibilities will include designing, building, and maintaining scalable data pipelines and architecture. You will be expected to develop and enhance ETL processes for data ingestion and transformation, collaborating closely with data scientists and analysts to meet data requirements and deliver effective solutions. Monitoring data integrity through data quality checks and ensuring compliance with data governance and security policies will also be part of your role. Leveraging cloud-based data technologies and services for storage and processing will be crucial to your success in this position.

To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Proficiency in SQL and practical experience with databases such as MySQL, PostgreSQL, or Oracle are essential. Your expertise in programming languages like Python, Java, or Scala will be highly valuable, along with hands-on experience in big data technologies like Hadoop, Spark, or Kafka. Familiarity with cloud platforms such as AWS, Azure, or Google Cloud is preferred. Understanding data warehousing concepts and tools such as Redshift and Snowflake, coupled with experience in data modeling and architecture design, will further strengthen your candidacy.
Posted 2 weeks ago
7.0 - 12.0 years
0 Lacs
Karnataka
On-site
As an Infrastructure Engineer or Lead at our global financial firm's GCC in Bengaluru, you will play a crucial role in constructing a secure, scalable, and high-performance infrastructure to support institutional capital at scale. This is a unique opportunity to contribute to a platform-first culture alongside top technologists, where you will be involved in establishing the core infrastructure from the ground up.

We are currently seeking talented individuals to fill various specialized roles within our Infrastructure Engineering team:

1. Network Engineers & Leads:
- Proficiency in technologies such as Cisco, Arista, Palo Alto, and Juniper
- Experience in low-latency trading networks, BGP/OSPF, and co-location setups
- Preference for expertise in infrastructure automation

2. Linux Engineer & Lead (Red Hat / OpenShift):
- Extensive experience with Red Hat Linux and OpenShift
- Skills in performance tuning and kernel optimization, with strong scripting/debugging abilities
- Emphasis on being more of an engineer than an administrator

3. Storage Engineer & Lead:
- Knowledge of block & file storage (EMC SRDF, NetApp) and object stores
- Expertise in load balancing, resiliency, and disaster recovery
- Experience with NewtonX is advantageous

4. Cloud Engineer & Lead:
- Proficient in AWS workload deployment & migration
- Strong understanding of VMware, cloud optimization, and architecture
- An infrastructure performance mindset is essential

5. Automation Engineer & Lead:
- Familiarity with tools like Terraform, Puppet, AWS/GCP, and CI/CD
- Strong Python development skills for building Infrastructure as Code
- Not a pure DevOps role; we are looking for individuals who can build solutions

6. Database Administrator & Lead:
- Expertise in RDBMS & NoSQL databases such as Oracle, SQL, Redshift, and Neptune
- Experience in distributed systems, high availability, tuning, and recovery workflows

7. Windows Engineer & Lead:
- Proficient in Active Directory, domain services, and system hardening
- Specialization in enterprise-scale identity & access management

8. Data Recovery Engineer & Lead:
- Leading recovery efforts for global production issues
- Conducting deep root cause analysis and workflow optimization
- Ability to handle high-stakes, high-pressure coordination scenarios

Location: Bengaluru
Culture: Product-minded, security-first, technically elite

If you are passionate about infrastructure engineering and want to be part of a dynamic team that values innovation and excellence, we encourage you to apply for one of these exciting roles with us.
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
Hyderabad, Telangana
On-site
You are a highly skilled Architect with expertise in Snowflake Data Modeling and Cloud Data solutions. With over 12 years of experience in Data Modeling/Data Warehousing and 5+ years specifically in Snowflake, you will lead Snowflake optimizations at the warehouse and database levels. Your role involves setting up, configuring, and deploying Snowflake components efficiently for various projects (see the sketch after this posting).

You will work with a passionate team of engineers at ValueMomentum's Engineering Center, focused on transforming the P&C insurance value chain through innovative solutions. The team specializes in Cloud Engineering, Application Engineering, Data Engineering, Core Engineering, Quality Engineering, and Domain expertise. As part of the team, you will have opportunities for role-specific skill development and contribute to impactful projects.

Key Responsibilities:
- Work on Snowflake optimizations at the warehouse and database levels.
- Set up, configure, and deploy Snowflake components like databases, warehouses, and roles.
- Set up and monitor data shares and Snowpipes for Snowflake projects.
- Implement Snowflake cloud management frameworks for monitoring, alerting, governance, budgets, change management, and cost optimization.
- Develop cloud usage reporting for cost-related insights, metrics, and KPIs.
- Build and enhance Snowflake forecasting processes and explore cloud spend trends.

Requirements:
- 12+ years of experience in Data Modeling/Data Warehousing.
- 5+ years of experience in Snowflake Data Modeling and Architecture, including expertise in cloning, data sharing, and search optimization.
- Proficiency in Python, PySpark, and complex SQL for analysis.
- Experience with cloud platforms like AWS, Azure, and GCP.
- Knowledge of Snowflake performance management and cloud-based database role management.

ValueMomentum is a leading solutions provider for the global property and casualty insurance industry. It focuses on helping insurers achieve sustained growth, high performance, and stakeholder value. The company has served over 100 insurers and is dedicated to fostering resilient societies.

Benefits at ValueMomentum include a competitive compensation package, career advancement opportunities through coaching and mentoring programs, comprehensive training and certification programs, and performance management with goal setting, continuous feedback, and rewards for exceptional performers.
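To make the warehouse setup and cost-reporting duties above concrete, here is a hedged sketch using the Snowflake Python connector. The account, credentials, and warehouse name are placeholders, and the metering query assumes access to the SNOWFLAKE.ACCOUNT_USAGE share.

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="...",
    role="SYSADMIN",
)
cur = conn.cursor()

# Create a right-sized warehouse with auto-suspend to control spend
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS REPORTING_WH
      WAREHOUSE_SIZE = 'SMALL'
      AUTO_SUSPEND = 60
      AUTO_RESUME = TRUE
""")

# Simple cost trend: credits consumed per warehouse over the last 30 days
cur.execute("""
    SELECT warehouse_name,
           DATE_TRUNC('day', start_time) AS day,
           SUM(credits_used) AS credits
    FROM snowflake.account_usage.warehouse_metering_history
    WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    GROUP BY 1, 2
    ORDER BY 2, 3 DESC
""")
for name, day, credits in cur.fetchall():
    print(name, day, credits)
```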
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
Haryana
On-site
You will be responsible for leading and managing the delivery of projects and for achieving project and team goals. Your tasks will include building and supporting data ingestion and processing pipelines, designing and maintaining machine learning infrastructure, and leading client engagement on technical projects. You will define project scopes, track progress, and allocate work to the team. It will be essential to stay updated on big data technologies and conduct pilots to design scalable data architecture. Collaboration with software engineering teams to drive multi-functional projects to completion will also be a key aspect of your role.

To excel in this position, we expect you to have a minimum of 6 years of experience in data engineering, with at least 2 years in a leadership role. Experience working with global teams and remote clients is required. Hands-on experience in building data pipelines across various infrastructures, knowledge of statistical and machine learning techniques, and the ability to integrate machine learning into data pipelines are essential. Proficiency in advanced SQL, data warehousing concepts, and DataMart design is necessary. Strong familiarity with modern data platform components like Spark and Python, as well as experience with data warehouses (e.g., Google BigQuery, Redshift, Snowflake) and data lakes (e.g., GCS, AWS S3), is expected. Experience setting up and maintaining data pipelines with AWS Glue, Azure Data Factory, and Google Dataflow, along with relational SQL and NoSQL databases, is also required. Excellent problem-solving and communication skills are essential for this role.
Posted 2 weeks ago
4.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Allstate, great things happen when our people work together to protect families and their belongings from life’s uncertainties. And for more than 90 years our innovative drive has kept us a step ahead of our customers’ evolving needs. From advocating for seat belts, air bags, and graduated driving laws, to being an industry leader in pricing sophistication, telematics, and, more recently, device and identity protection.

Job Description
The Software Engineer Lead Consultant architects and designs their digital products using modern tools, technologies, frameworks, and systems. They apply a systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software. They own and manage running their application in production, and ultimately become accountable for the success of their digital products through achieving KPIs.

Job Title: Senior Software Engineer

About Arity and Our Ad Platform Team
Arity, a technology company founded by Allstate, is transforming transportation by leveraging one of the largest driving behavior databases globally. Arity’s ad platform team plays a key role in the programmatic advertising ecosystem, specifically via Arity PMP (Private Marketplace), which offers brands a unique way to reach highly targeted audiences based on driving behaviors and predictive analytics. Our team uses advanced telematics data to help insurers, advertisers, and transportation companies optimize strategies while enhancing customer experiences and reducing operational costs.

We are seeking a highly skilled Senior Software Engineer with 8 years of experience in software development, particularly in the .NET stack, React, and AWS. The ideal candidate will have hands-on experience building and scaling microservices in a high-traffic environment. They will work closely with a high-performing team, contributing to the design, development, and deployment of our cutting-edge ad platform while expanding their knowledge of modern technologies like React, Go, and telematics-based programmatic advertising.

Key Responsibilities
- Collaborate with architects, engineers, and business stakeholders to understand technical and business requirements and deliver scalable solutions.
- Design, develop, and maintain microservices using C#, Go, React, and AWS services like Lambda, S3, and RDS.
- Participate in code reviews, design discussions, and team retrospectives to foster a collaborative and high-performance engineering culture.
- Build and enhance CI/CD pipelines to ensure reliable and secure deployments.
- Implement performance monitoring and optimization practices to ensure the reliability of high-transaction systems.
- Expand technical expertise in modern stacks, including React and Go.

Experience & Qualifications
- 4-8 years of professional experience in Microsoft .NET and C# development.
- Proficiency in building and maintaining cloud-native applications, preferably on AWS.
- Experience designing, developing, and deploying microservices in a high-traffic or real-time environment.
- Experience with frontend technologies like React, CSS, HTML, and JavaScript.
- Familiarity with database technologies such as Redis, DynamoDB, and Redshift is a plus.
- Strong problem-solving skills, with experience working in agile, cross-functional teams.
- Exposure to ad-tech or telematics is a plus, with a keen interest in programmatic advertising.

Why Join Us?
Be part of a team that is transforming how businesses leverage driving behavior data for smarter advertising. Work in a collaborative, innovative, and growth-oriented environment that values learning and technical excellence. Opportunities to work on advanced cloud-native architectures and cutting-edge technologies like React, Go, and big data tools.

Primary Skills: Customer Centricity, Digital Literacy, Inclusive Leadership, Learning Agility, Results-Oriented

Shift Time:

Recruiter Info: Yateesh B G (ybgaa@allstate.com)

About Allstate
The Allstate Corporation is one of the largest publicly held insurance providers in the United States. Ranked No. 84 in the 2023 Fortune 500 list of the largest United States corporations by total revenue, The Allstate Corporation owns and operates 18 companies in the United States, Canada, Northern Ireland, and India. Allstate India Private Limited, also known as Allstate India, is a subsidiary of The Allstate Corporation. The India talent center was set up in 2012 and operates under the corporation's Good Hands promise. As it innovates operations and technology, Allstate India has evolved beyond its technology functions to become the critical strategic business services arm of the corporation. With offices in Bengaluru and Pune, the company offers expertise to the parent organization’s business areas, including technology and innovation, accounting and imaging services, policy administration, transformation solution design and support services, transformation of property liability service design, global operations and integration, and training and transition. Learn more about Allstate India here.
Posted 2 weeks ago
8.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description
The ideal candidate must possess strong communication skills, with an ability to listen, comprehend information, and share it with all key stakeholders, highlighting opportunities for improvement and any concerns. He/she must be able to work collaboratively with teams to execute tasks within defined timeframes while maintaining high-quality standards and superior service levels. The ability to take proactive action, and a willingness to take on responsibility beyond the assigned work area, is a plus. We are looking for a Business Intelligence Specialist with 8 years of progressive experience in driving marketing and campaign performance analytics, adept at developing and managing BI dashboards, automating reporting frameworks, and delivering actionable insights to optimize campaign ROI, engagement, and conversions.

Senior Analyst Roles and Responsibilities
Dashboarding & Reporting: Build and maintain real-time dashboards and automated performance reports using Tableau / Power BI and SQL to ensure timely insights delivery.
Performance Analytics & Insights Generation: Conduct funnel analysis, customer behavior modeling, and trend identification to generate insights that enhance marketing strategies (see the sketch after this listing).
Cross-functional Collaboration: Work with marketing, product, and data teams to align on KPIs and translate business needs into analytical solutions and presentations.

Technical and Functional Skills
BI Tools: Tableau / Power BI, Looker
Languages: SQL (PostgreSQL, MySQL), Python (Pandas, Matplotlib) / R
Data Platforms: Google BigQuery / Snowflake / AWS Redshift
Marketing Tools: Google Analytics / Adobe Analytics / Salesforce Marketing Cloud
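Funnel analysis, named in the responsibilities above, is easy to show concretely. Below is a minimal sketch in Python/Pandas; the toy events table, stage names, and column names are invented for illustration, not taken from the listing.

```python
# Minimal funnel-analysis sketch: distinct users per stage, then
# stage-to-stage conversion rates. All data here is hypothetical.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3],
    "stage":   ["impression", "click", "conversion",
                "impression", "click", "impression"],
})

funnel_order = ["impression", "click", "conversion"]

# Count distinct users reaching each stage, in funnel order.
stage_users = (events.groupby("stage")["user_id"].nunique()
                     .reindex(funnel_order))

# Rate of users progressing from the previous stage to this one.
conversion_rates = stage_users / stage_users.shift(1)

print(stage_users)
print(conversion_rates.round(2))
```

The same logic is often expressed directly in warehouse SQL (e.g., COUNT(DISTINCT user_id) grouped by stage in BigQuery or Snowflake) when the event volume is too large for Pandas.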
Posted 2 weeks ago
6.0 years
20 - 35 Lacs
Hyderabad Jubilee Ho, Hyderabad, Telangana
On-site
Data Engineer (6–8 Years) | Hyderabad, India | SaaS Product | MongoDB | Finance Automation. Resourcedekho is hiring for a leading client in the agentic AI-based finance automation space. We’re looking for a passionate and experienced Data Engineer to join a high-impact team in Hyderabad.

Why Join Us? Open budget for the right talent: compensation is based on your expertise and interview performance. Work with cutting-edge technologies in a high-growth, product-driven environment. Collaborate with top minds from reputed institutions (IIT/IIM or similar).

What You’ll Do: Design, build, and optimize robust data pipelines for ingesting, processing, and transforming data from diverse sources. Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins (a minimal orchestration sketch follows this listing). Develop and optimize SQL/NoSQL schemas, queries, and stored procedures for efficient data retrieval. Work with both relational (MySQL, PostgreSQL) and NoSQL (MongoDB, DocumentDB) databases. Design and implement scalable data warehouse solutions for analytics and ML applications. Collaborate with data scientists and ML engineers to prepare data for AI/ML models. Ensure data quality, monitoring, and alerting for accuracy and reliability. Optimize query performance through indexing, partitioning, and query refactoring. Maintain comprehensive documentation for data models, pipelines, and processes. Stay updated with the latest data engineering technologies and best practices.

What We’re Looking For: 6+ years of experience in data engineering or related roles. Strong proficiency in SQL and experience with MySQL and PostgreSQL. Hands-on expertise with MongoDB (or AWS DocumentDB) is mandatory. Proven experience designing and optimizing ETL processes (Kafka, Debezium, Airflow, etc.). Solid understanding of data modeling, warehousing, and performance optimization. Experience with AWS data services (RDS, Redshift, S3, Glue, Kinesis, ELK stack). Proficiency in at least one programming language (Python, Node.js, Java). Experience with Git and CI/CD pipelines. Bachelor’s degree in Computer Science, Engineering, or a related field. SaaS product experience is a must. Preference for candidates from reputed colleges (IIT/IIM or similar) and with a stable career history.

Bonus Points For: Experience with graph databases (Neo4j, Amazon Neptune). Knowledge of big data technologies (Hadoop, Spark, Hive, data lakes). Real-time/streaming data processing. Familiarity with data governance, security, Docker, and Kubernetes. FinTech or financial back-office domain experience. Startup/high-growth environment exposure.

Ready to take your data engineering career to the next level? Apply now or reach out to us at career@resourceDekho.com to learn more! Please note: only candidates with relevant SaaS product experience and strong MongoDB skills will be considered.

Job Type: Full-time. Pay: ₹2,000,000.00 - ₹3,500,000.00 per year. Application Question(s): Have you completed your education from a premier institute such as IIT, IIM, IISc, NIT, IIIT-Hyderabad, or any other top-ranked institution in India? Location: Hyderabad Jubilee Ho, Hyderabad, Telangana (Required). Application Deadline: 22/07/2025. Expected Start Date: 18/08/2025
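For the ETL workflows this listing describes, an orchestration layer such as Airflow typically ties extraction and loading together. The sketch below is a hedged, minimal Airflow 2.x DAG; the DAG id, task bodies, and schedule are hypothetical placeholders, not details from the listing.

```python
# Minimal daily ETL DAG sketch (Airflow 2.x style).
# DAG id, schedule, and task logic are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    ...  # e.g., read from a Debezium/Kafka-sourced staging table

def load_warehouse(**context):
    ...  # e.g., upsert the extracted rows into an analytics schema

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_orders",
                             python_callable=extract_orders)
    load = PythonOperator(task_id="load_warehouse",
                          python_callable=load_warehouse)
    extract >> load  # load runs only after a successful extract
```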
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: AWS Data Engineer
Location: Hyderabad
Experience: 8-10 Years

Job Summary
We are seeking an experienced Senior Data Engineer with a strong background in Python, PySpark, and the AWS ecosystem. The ideal candidate will have hands-on experience working with large-scale data pipelines, SQL performance tuning, and cloud-native architectures. You will be responsible for designing, developing, and optimizing complex ETL workflows using services like AWS Glue, EMR, Redshift, Athena, and Step Functions (a minimal Glue job skeleton follows this listing). Familiarity with messaging, database services, and CloudFormation is a plus.

Must-Have Skills
Strong hands-on experience in Python and PySpark for data engineering use cases. Proficiency with AWS services: Glue, EMR, RDS, Redshift, Athena, Lambda, S3, Step Functions. Expertise in complex SQL query writing and performance tuning on large datasets. Solid understanding of data warehousing, ETL/ELT pipelines, and cloud architecture.
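As a hedged illustration of the Glue-plus-PySpark work described above, the skeleton below follows the standard AWS Glue job structure, with a push-down predicate shown as one common performance-tuning lever. The catalog database, table name, filter, and output path are hypothetical placeholders; it runs only inside a Glue job environment.

```python
# Hedged AWS Glue job skeleton (runs within a Glue job environment).
# Database, table, predicate, and output path are hypothetical.
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog; pushing the filter down means only
# the needed partitions are scanned, a routine tuning step.
source = glue_context.create_dynamic_frame.from_catalog(
    database="example_db",
    table_name="example_events",
    push_down_predicate="event_date >= '2025-01-01'",
)

# Convert to a Spark DataFrame for SQL-style transforms, then write out.
df = source.toDF().filter("status = 'COMPLETE'")
df.write.mode("append").parquet("s3://example-bucket/curated/events/")

job.commit()
```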
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group

Job Description:
You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Platform Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will collaborate with engineers, architects, and business partners to establish robust governance models, technology roadmaps, and innovative security frameworks that safeguard critical enterprise applications.

What You Will Deliver
Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform. Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements. Help drive operational excellence by supporting system performance, availability, and scalability. Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems. Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions. Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency (see the CDK sketch after this listing). Support disaster recovery planning and high availability for enterprise platforms. Collaborate with engineering and operations teams to ensure platform solutions align with business needs. Provide guidance on platform investments, security risks, and operational improvements. Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
Bachelor’s degree in technology, engineering, or a related technical discipline. 3–5 years of experience in enterprise technology, security, or platform operations in large-scale environments. Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep). Knowledge of ITIL, Agile delivery, and enterprise governance frameworks. Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink. Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks). Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering. Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
Technical experience in Microsoft Azure, AWS, Databricks, and Palantir. Understanding of data ingestion pipelines, governance, security, and data visualization. Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience. Familiarity with performance tuning, data indexing, and distributed query optimization.
Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management. AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
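One of the IaC tools this listing names is the AWS CDK. As a minimal, hedged sketch of what Infrastructure-as-Code looks like in that style (CDK v2, Python), with hypothetical stack, construct, and bucket names:

```python
# Minimal AWS CDK (v2, Python) stack sketch. Stack and construct
# names are hypothetical; defaults favour security and auditability.
import aws_cdk as cdk
from aws_cdk import aws_s3 as s3

class DataPlatformStack(cdk.Stack):
    def __init__(self, scope, construct_id, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        # Encrypted, versioned landing-zone bucket with public access blocked.
        s3.Bucket(
            self, "LandingBucket",
            versioned=True,
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
        )

app = cdk.App()
DataPlatformStack(app, "data-platform-dev")
app.synth()  # emits a CloudFormation template for deployment
```

Defining the platform this way keeps environments reproducible and reviewable, which is the consistency benefit the listing attributes to IaC.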
Posted 2 weeks ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Entity: Technology
Job Family Group: IT&S Group

Job Description:
You will work with
You will be part of a high-energy, top-performing team of engineers and product managers, working alongside technology and business leaders to support the execution of transformative data initiatives that make a real impact.

Let me tell you about the role
As a Senior Data Tooling Services Engineer, you will play a strategic role in shaping and securing enterprise-wide technology landscapes, ensuring their resilience, performance, and compliance. You will provide deep expertise in security, infrastructure, and operational excellence, driving large-scale transformation and automation initiatives. Your role will encompass platform architecture, system integration, cybersecurity, and operational continuity. You will collaborate with engineers, architects, and business partners to establish robust governance models, technology roadmaps, and innovative security frameworks that safeguard critical enterprise applications.

What You Will Deliver
Contribute to enterprise technology architecture, security frameworks, and platform engineering for our core data platform. Support end-to-end security implementation across our unified data platform, ensuring compliance with industry standards and regulatory requirements. Help drive operational excellence by supporting system performance, availability, and scalability. Contribute to modernization and transformation efforts, assisting in integration with enterprise IT systems. Assist in the design and execution of automated security monitoring, vulnerability assessments, and identity management solutions. Apply DevOps, CI/CD, and Infrastructure-as-Code (IaC) approaches to improve deployment and platform consistency. Support disaster recovery planning and high availability for enterprise platforms. Collaborate with engineering and operations teams to ensure platform solutions align with business needs. Provide guidance on platform investments, security risks, and operational improvements. Partner with senior engineers to support long-term technical roadmaps that reduce operational burden and improve scalability.

What you will need to be successful (experience and qualifications)

Technical Skills We Need From You
Bachelor’s degree in technology, engineering, or a related technical discipline. 3–5 years of experience in enterprise technology, security, or platform operations in large-scale environments. Experience with CI/CD pipelines, DevOps methodologies, and Infrastructure-as-Code (e.g., AWS CDK, Azure Bicep). Knowledge of ITIL, Agile delivery, and enterprise governance frameworks. Proficiency with big data technologies such as Apache Spark, Hadoop, Kafka, and Flink. Experience with cloud platforms (AWS, GCP, Azure) and cloud-native data solutions (BigQuery, Redshift, Snowflake, Databricks). Strong skills in SQL, Python, or Scala, and hands-on experience with data platform engineering. Understanding of data modeling, data warehousing, and distributed systems architecture.

Essential Skills
Technical experience in Microsoft Azure, AWS, Databricks, and Palantir. Understanding of data ingestion pipelines, governance, security, and data visualization. Experience supporting multi-cloud data platforms at scale, balancing cost, performance, and resilience. Familiarity with performance tuning, data indexing, and distributed query optimization.
Exposure to both real-time and batch data streaming architectures.

Skills That Set You Apart
Proven success navigating global, highly regulated environments, ensuring compliance, security, and enterprise-wide risk management. AI/ML-driven data engineering expertise, applying intelligent automation to optimize workflows.

About bp
Our purpose is to deliver energy to the world, today and tomorrow. For over 100 years, bp has focused on discovering, developing, and producing oil and gas in the nations where we operate. We are one of the few companies globally that can provide governments and customers with an integrated energy offering. Delivering our strategy sustainably is fundamental to achieving our ambition to be a net zero company by 2050 or sooner.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform crucial job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status.

Travel Requirement: Up to 10% travel should be expected with this role.
Relocation Assistance: This role is eligible for relocation within country.
Remote Type: This position is a hybrid of office/remote working.

Skills: Agility core practices, Analytics, API and platform design, Business Analysis, Cloud Platforms, Coaching, Communication, Configuration management and release, Continuous deployment and release, Data Structures and Algorithms (Inactive), Digital Project Management, Documentation and knowledge sharing, Facilitation, Information Security, iOS and Android development, Mentoring, Metrics definition and instrumentation, NoSql data modelling, Relational Data Modelling, Risk Management, Scripting, Service operations and resiliency, Software Design and Development, Source control and code management {+ 4 more}

Legal Disclaimer: We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, socioeconomic status, neurodiversity/neurocognitive functioning, veteran status or disability status. Individuals with an accessibility need may request an adjustment/accommodation related to bp’s recruiting process (e.g., accessing the job application, completing required assessments, participating in telephone screenings or interviews, etc.). If you would like to request an adjustment/accommodation related to the recruitment process, please contact us. If you are selected for a position and depending upon your role, your employment may be contingent upon adherence to local policy. This may include pre-placement drug screening, medical review of physical fitness for the role, and background checks.
Posted 2 weeks ago