6.0 - 11.0 years
4 - 8 Lacs
Hyderabad, New Delhi
Hybrid
Working Mode: Hybrid | Payroll: IDESLABS | Location: Pan India | PF detection is mandatory

Job Description - Key Responsibilities:
- Work on AS400 with expertise in I5250 and Presto for inventory and order fulfillment.
- Utilize modern RPG techniques including RPG, RPGLE, and Embedded SQL (see the sketch after this list).
- Design and develop applications using RPG, RPGLE, CL400, and CLLE.
- Perform database design and development on DB2/400, ensuring efficient query execution and optimization.
- Develop and troubleshoot applications using AS400 tools like SDA, RLU, and PDM.
- Write and optimize complex SQL queries for embedded and interactive sessions.
- Perform end-to-end unit testing, debug interactive and batch jobs, and ensure seamless functionality of applications.
- Collaborate directly with client teams to gather business requirements and deliver appropriate solutions.
- Maintain and enhance existing applications, ensuring smooth operations and performance.
- Work with one or more change management tools such as Aldon, Turnover, or Thenon (preferred).
- Be adaptable to switching between tasks, handling both development and maintenance activities.
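The embedded-SQL access pattern described above is normally written in RPGLE; as a rough cross-language illustration of the same parameterized-query idea, here is a minimal Python sketch that queries a Db2 for i (DB2/400) table over ODBC. The DSN, credentials, and library/table names are illustrative assumptions, not details from the posting.

```python
# Minimal sketch: a parameterized query against a DB2/400 (Db2 for i)
# order table over ODBC. Assumes the IBM i Access ODBC driver is
# installed and a DSN named "AS400PROD" exists; ORDLIB.ORDHDR and its
# columns are hypothetical names.
import pyodbc

conn = pyodbc.connect("DSN=AS400PROD;UID=devuser;PWD=secret")
cur = conn.cursor()

# Analogous to an embedded-SQL SELECT ... WHERE STATUS = :status in RPGLE.
cur.execute(
    "SELECT ORDNO, STATUS, QTYORD FROM ORDLIB.ORDHDR "
    "WHERE STATUS = ? ORDER BY ORDNO",
    "OPEN",
)
for ordno, status, qty in cur.fetchall():
    print(ordno, status, qty)

conn.close()
```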
Posted 1 month ago
4.0 - 9.0 years
4 - 7 Lacs
Bengaluru
Work from Office
Key Responsibilities
- Work on AS400 with expertise in I5250 and Presto for inventory and order fulfillment.
- Utilize modern RPG techniques including RPG, RPGLE, and Embedded SQL.
- Design and develop applications using RPG, RPGLE, CL400, and CLLE.
- Perform database design and development on DB2/400, ensuring efficient query execution and optimization.
- Develop and troubleshoot applications using AS400 tools like SDA, RLU, and PDM.
- Write and optimize complex SQL queries for embedded and interactive sessions.
- Perform end-to-end unit testing, debug interactive and batch jobs, and ensure seamless functionality of applications.
- Collaborate directly with client teams to gather business requirements and deliver appropriate solutions.
- Maintain and enhance existing applications, ensuring smooth operations and performance.
- Work with one or more change management tools such as Aldon, Turnover, or Thenon (preferred).
- Be adaptable to switching between tasks, handling both development and maintenance activities.
Posted 1 month ago
8.0 - 13.0 years
5 - 9 Lacs
Hyderabad
Hybrid
Immediate Openings: AS400 Developer (Pan India, Contract)
Skill: AS400 Developer | Notice Period: Immediate | Employment Type: Contract

Key Responsibilities:
- Work on AS400 with expertise in I5250 and Presto for inventory and order fulfillment.
- Utilize modern RPG techniques including RPG, RPGLE, and Embedded SQL.
- Design and develop applications using RPG, RPGLE, CL400, and CLLE.
- Perform database design and development on DB2/400, ensuring efficient query execution and optimization.
- Develop and troubleshoot applications using AS400 tools like SDA, RLU, and PDM.
- Write and optimize complex SQL queries for embedded and interactive sessions.
- Perform end-to-end unit testing, debug interactive and batch jobs, and ensure seamless functionality of applications.
- Collaborate directly with client teams to gather business requirements and deliver appropriate solutions.
- Maintain and enhance existing applications, ensuring smooth operations and performance.
- Work with one or more change management tools such as Aldon, Turnover, or Thenon (preferred).
- Be adaptable to switching between tasks, handling both development and maintenance activities.
Posted 1 month ago
3.0 - 8.0 years
5 - 9 Lacs
Bengaluru
Work from Office
WHAT YOU WILL WORK ON
- Serve as a liaison between product, engineering, and data consumers by analyzing the data, finding gaps, and helping drive the roadmap
- Support and troubleshoot issues (data and process), identify root causes, and proactively recommend sustainable corrective actions by collaborating with engineering/product teams
- Communicate actionable insights using data, often for stakeholders and non-technical audiences
- Write technical specifications describing requirements for data movement, transformation, and quality checks

WHAT YOU BRING
- Bachelor's Degree in Computer Science, MIS, other quantitative disciplines, or related fields
- 3-7 years of relevant analytical experience translating strategic vision into requirements and working with the best engineers, product managers, and data scientists
- Ability to conduct data analysis, develop and test hypotheses, and deliver insights with minimal supervision
- Experience identifying and defining KPIs using data for business areas such as Sales, Consumer Behaviour, Supply Chain, etc.
- Exceptional SQL skills
- Experience with a modern visualization tool stack, such as Tableau, Power BI, Domo, etc.
- Knowledge of open-source, big data, and cloud infrastructure such as AWS, Hive, Snowflake, Presto, etc.
- Incredible attention to detail, with a structured problem-solving approach
- Excellent communication skills (written and verbal)
- Experience with agile development methodologies
- Experience with retail or ecommerce domains is a plus
Posted 1 month ago
6.0 - 11.0 years
5 - 8 Lacs
Bengaluru
Work from Office
- Experience in a cloud platform, e.g., AWS, GCP, Azure, etc.
- Experience in distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala
- Performance tuning: optimizing SQL and PySpark for performance
- Airflow workflow scheduling tool for creating data pipelines (see the sketch after this list)
- GitHub source control, plus experience creating/configuring Jenkins pipelines
- Experience in EMR/EC2, Databricks, etc.
- DWH tools, including SQL databases, Presto, and Snowflake
- Streaming, serverless architecture
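Since the list above calls out Airflow for pipeline scheduling alongside Spark, here is a minimal sketch of a DAG that submits a nightly PySpark job via spark-submit. The dag_id, script path, and cluster options are illustrative assumptions.

```python
# Minimal sketch: an Airflow DAG that runs a nightly PySpark job.
# The script path and spark-submit options are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="nightly_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # spark-submit via BashOperator keeps the sketch free of provider packages.
    run_etl = BashOperator(
        task_id="run_spark_etl",
        bash_command=(
            "spark-submit --master yarn --deploy-mode cluster "
            "/opt/jobs/sales_etl.py --run-date {{ ds }}"
        ),
    )
```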
Posted 1 month ago
3.0 - 5.0 years
5 - 7 Lacs
Bengaluru
Hybrid
Shift: (GMT+05:30) Asia/Kolkata (IST)

What do you need for this opportunity? Must-have skills: Data Engineering, Big Data Technologies, Hadoop, Spark, Hive, Presto, Airflow, Data Modeling, ETL Development, Data Lake Architecture, Python, Scala, GCP (BigQuery, Dataproc, Dataflow, Cloud Composer), AWS Big Data Stack, Azure

About the job: The Data Engineering team within the SMART org supports development of large-scale data pipelines for machine learning and analytical solutions related to unstructured and structured data. You'll have the opportunity to gain hands-on experience with all kinds of systems in the data platform ecosystem. Your work will have a direct impact on all applications that our millions of customers interact with every day: search results, homepage content, emails, auto-complete searches, browse pages, and product carousels. You will build and scale data platforms that measure the effectiveness of Wayfair's ad costs and power the media attribution that informs day-to-day and major marketing spend decisions.

About the Role: As a Data Engineer, you will be part of the Data Engineering team, with this role being inherently multi-functional; the ideal candidate will work with Data Scientists, Analysts, and application teams across the company, as well as all other Data Engineering squads at Wayfair. We are looking for someone with a love for data, clarity in understanding requirements, and the ability to iterate quickly. Successful candidates will have strong engineering and communication skills and a belief that data-driven processes lead to phenomenal products.

What you'll do:
- Build and launch data pipelines and data products focused on the SMART org, helping teams push the boundaries of insights, creating new product features using data, and powering machine learning models.
- Build cross-functional relationships to understand data needs, build key metrics, and standardize their usage across the organization.
- Utilize current and leading-edge technologies in software engineering, big data, streaming, and cloud infrastructure.

What You'll Need:
- Bachelor's/Master's degree in Computer Science or a related technical subject area, or an equivalent combination of education and experience.
- 3+ years of relevant work experience in the data engineering field with web-scale data sets.
- Demonstrated strength in data modeling, ETL development, and data lake architecture.
- Data warehousing experience with big data technologies (Hadoop, Spark, Hive, Presto, Airflow, etc.).
- Coding proficiency in at least one modern programming language (Python, Scala, etc.).
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing, plus query performance tuning skills on large data sets.
- Industry experience as a big data engineer working alongside cross-functional teams such as Software Engineering, Analytics, and Data Science, with a track record of manipulating, processing, and extracting value from large datasets.
- Strong business acumen.
- Experience leading large-scale data warehousing and analytics projects, including GCP technologies (BigQuery, Dataproc, GCS, Cloud Composer, Dataflow) or related big data technologies on other cloud platforms like AWS, Azure, etc.
- A team player who introduces and follows best practices in the data engineering space.
- Ability to effectively communicate (both written and verbal) technical information and the results of engineering design at all levels of the organization.
Good to have: Understanding of NoSQL databases and pub/sub architecture setup. Familiarity with BI tools like Looker, Tableau, AtScale, Power BI, or any similar tools.
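For the GCP side of the stack this posting names (BigQuery in particular), a minimal sketch of a parameterized query from Python might look like the following; the dataset, table, and column names are illustrative assumptions, not Wayfair schemas.

```python
# Minimal sketch: a parameterized BigQuery query from Python.
# `analytics.media_spend` and its columns are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # project is picked up from the environment

query = """
    SELECT campaign_id, SUM(ad_cost) AS total_cost
    FROM `analytics.media_spend`
    WHERE event_date = @run_date
    GROUP BY campaign_id
"""
job = client.query(
    query,
    job_config=bigquery.QueryJobConfig(
        query_parameters=[
            bigquery.ScalarQueryParameter("run_date", "DATE", "2024-01-01")
        ]
    ),
)
for row in job.result():
    print(row.campaign_id, row.total_cost)
```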
Posted 1 month ago
5.0 - 10.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Job description

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Senior Data Analyst
Experience: 5+ Years
Skill Set: Data Analysis, SQL, and Cloud (AWS, Azure, GCP); retail domain exposure is a must
Location: Pune, Hyderabad, Gurgaon

Key Requirements:
- Bachelor's degree in Computer Science, MIS, or related fields.
- 6-7 years of relevant analytical experience, translating strategic vision into actionable requirements.
- Ability to conduct data analysis, develop and test hypotheses, and deliver insights with minimal supervision.
- Experience identifying and defining KPIs for business areas such as Sales, Consumer Behavior, Supply Chain, etc.
- Exceptional SQL skills.
- Experience with modern visualization tools like Tableau, Power BI, Domo, etc.
- Knowledge of open-source, big data, and cloud infrastructure such as AWS, Hive, Snowflake, Presto, etc.
- Incredible attention to detail with a structured problem-solving approach.
- Excellent communication skills (written and verbal).
- Experience with agile development methodologies.
- Experience in retail or e-commerce domains is a plus.

How to Apply: Interested candidates can share their CV at pragati.jha@gspann.com.
Posted 1 month ago
2.0 - 6.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Flexing It is a freelance consulting marketplace that connects freelancers and independent consultants with organisations seeking independent talent. Flexing It has partnered with our client, a global leader in energy management and automation, which is seeking a Data Engineer to prepare data and make it available in an efficient and optimized format for their different data consumers, ranging from BI and analytics to data science applications. The role requires working with current technologies, in particular Apache Spark, Lambda & Step Functions, Glue Data Catalog, and Redshift, in an AWS environment.

Key Responsibilities:
- Design and develop new data ingestion patterns into IntelDS Raw and/or Unified data layers, based on the requirements and needs for connecting new data sources or building new data objects. Working with ingestion patterns allows the data pipelines to be automated.
- Participate in and apply DevSecOps practices by automating the integration and delivery of data pipelines in a cloud environment. This can include the design and implementation of end-to-end data integration tests and/or CI/CD pipelines.
- Analyze existing data models, and identify and implement performance optimizations for data ingestion and data consumption. The objective is to accelerate data availability within the platform and to consumer applications.
- Support client applications in connecting and consuming data from the platform, and ensure they follow our guidelines and best practices.
- Participate in the monitoring of the platform and the debugging of detected issues and bugs.

Skills required:
- Minimum of 3 years of prior experience as a data engineer, with proven experience in Big Data and Data Lakes in a cloud environment.
- Bachelor's or Master's degree in computer science or applied mathematics (or equivalent).
- Proven experience working with data pipelines / ETL / BI, regardless of the technology.
- Proven experience working with AWS, including at least 3 of: Redshift, S3, EMR, CloudFormation, DynamoDB, RDS, Lambda.
- Big Data technologies and distributed systems: one of Spark, Presto, or Hive.
- Python language: scripting and object oriented.
- Fluency in SQL for data warehousing (Redshift in particular is a plus).
- Good understanding of data warehousing and data modelling concepts.
- Familiarity with Git, Linux, and CI/CD pipelines is a plus.
- Strong systems/process orientation with demonstrated analytical thinking, organization skills, and problem-solving skills.
- Ability to self-manage, prioritize, and execute tasks in a demanding environment.
- Strong consultancy orientation and experience, with the ability to form collaborative, productive working relationships across diverse teams and cultures, is a must.
- Willingness and ability to train and teach others.
- Ability to facilitate meetings and follow up on resulting action items.
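The Lambda and Glue pieces of the stack above typically meet in small event handlers. As a rough illustration, here is a sketch of a Lambda function that starts a Glue job whenever a new object lands in S3; the job name and argument keys are hypothetical, not taken from the client's platform.

```python
# Minimal sketch: an AWS Lambda handler that starts a Glue job for each
# newly arrived S3 object. "raw_to_unified_ingest" and the --source_path
# argument are hypothetical names.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        response = glue.start_job_run(
            JobName="raw_to_unified_ingest",
            Arguments={"--source_path": f"s3://{bucket}/{key}"},
        )
        print("Started Glue run:", response["JobRunId"])
```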
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Digantara is a leading Space Surveillance and Intelligence company focused on ensuring orbital safety and sustainability. With expertise in space-based detection, tracking, identification, and monitoring, Digantara provides comprehensive domain awareness across various regimes, allowing end-users to gain actionable intelligence on a single platform. At the core of its infrastructure lies a sophisticated integration of hardware and software capabilities aligned with the key principles of situational awareness: perception (data collection), comprehension (data processing), and prediction (analytics). This holistic approach empowers Digantara to monitor all Resident Space Objects (RSOs) in orbit, fostering comprehensive domain awareness.

We are seeking a skilled Back End Developer to join our dynamic team. As a Back End Developer, you will be responsible for designing, developing, and maintaining the server-side logic of our web platform. Your primary focus will be on building an efficient event streaming platform that can handle massive data pipelines. You will work closely with the front-end developers, astrodynamics engineers, and other team members to ensure seamless integration between the multiple microservices.

Why Us?
- Competitive incentives, a blazing team, and pretty much everything you have heard about a startup, plus you get to work on space technology.
- Hustle in a well-funded startup with a flat hierarchy that allows you to take charge of your responsibilities and create your moonshot.

Ideal Candidate: Someone experienced in building distributed event streaming platforms capable of handling massive data pipelines.

Responsibilities
- Build the space situational awareness platform with clean, modular, and well-documented code that complies with best practices and coding standards.
- Handle continuous streams of data, complex event processing, and asynchronous communication to build real-time data pipelines and an event-driven architecture.
- Manage databases and handle the big data that powers our web platform.
- Troubleshoot and debug issues to ensure smooth functionality across different systems.
- Stay up-to-date with the latest backend development trends, tools, and techniques.
- Participate in code reviews, providing constructive feedback and suggestions for improvement.
- Contribute to the continuous improvement of development processes and workflows.

Required Qualifications
- 2 or more years of experience in designing APIs & databases.
- Solid understanding of event streaming platforms and messaging queues like Apache Kafka.
- Proficiency in any server-side programming language and runtimes, preferably Golang.
- Experience with relational databases like Postgres.
- Knowledge of version control systems (e.g., Git) and collaborative development workflows.
- Excellent problem-solving and debugging skills.
- Ability to work effectively in a fast-paced, collaborative team environment.
- Strong communication skills, with the ability to articulate technical concepts to non-technical stakeholders.

Preferred Qualities
- Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
- Proficiency in Golang would be an added advantage.
- Familiarity with the Hadoop ecosystem.
- Being acquainted with Hive, HDFS, Presto, HBase, and Spark would be an added benefit.
- Knowledge of cloud platforms and their performance optimisation techniques (e.g., AWS, Azure, GCP).
- Prior experience in a product or start-up company would further enhance your appeal to us.
- Demonstrates a proactive attitude towards learning, being highly adaptable, and eager to acquire new knowledge and skills.

General Requirements
- Ability to work in a mission-focused, operational environment.
- Ability to think critically and make independent decisions.
- Interpersonal skills to enable working in a diverse and dynamic team.
- Maintain a regular and predictable work schedule.
- Verbal and written communication skills as well as organisational skills.
- Travel occasionally as necessary.

Job Location: Hebbal, Bengaluru, Karnataka, India
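Given the emphasis on Kafka-style event streaming above (the team prefers Golang, but as a language-neutral sketch), a minimal consumer with the kafka-python client could look like this; the topic, brokers, group id, and message fields are illustrative assumptions.

```python
# Minimal sketch: consuming a JSON telemetry stream with kafka-python.
# Topic, brokers, group id, and field names are hypothetical.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "rso-telemetry",
    bootstrap_servers=["localhost:9092"],
    group_id="ssa-platform",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Hand each observation to the downstream processing pipeline.
    print(event.get("object_id"), event.get("timestamp"))
```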
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Chennai
Work from Office
Notice period: Immediate to 15 days
Timings: 1:00 pm - 10:00 pm (IST)
Work Mode: WFO (Mon-Fri)

We are seeking a strategic and innovative Senior Data Scientist to join our high-performing Data Science team. In this role, you will lead the design, development, and deployment of advanced analytics and machine learning solutions that directly impact business outcomes. You will collaborate cross-functionally with product, engineering, and business teams to translate complex data into actionable insights and data products.

Key Responsibilities
- Lead and execute end-to-end data science projects, encompassing problem definition, data exploration, model creation, assessment, and deployment.
- Develop and deploy predictive models, optimization techniques, and statistical analyses to address tangible business needs.
- Articulate complex findings through clear and persuasive storytelling for both technical experts and non-technical stakeholders.
- Spearhead experimentation methodologies, such as A/B testing, to enhance product features and overall business outcomes (see the sketch after this list).
- Partner with data engineering teams to establish dependable and scalable data infrastructure and production-ready models.
- Guide and mentor junior data scientists, while also fostering team best practices and contributing to research endeavors.

Required Qualifications & Skills:
- Master's or PhD in Computer Science, Statistics, Mathematics, or a related field
- 5+ years of practical experience in data science, including deploying models to production
- Expertise in Python and SQL
- Solid background in ML frameworks such as scikit-learn, TensorFlow, and PyTorch
- Competence in data visualization tools like Tableau, Power BI, and matplotlib
- Comprehensive knowledge of statistics, machine learning principles, and experimental design
- Experience with cloud platforms (AWS, GCP, or Azure) and Git for version control
- Exposure to MLOps tools and methodologies (e.g., MLflow, Kubeflow, Docker, CI/CD)
- Familiarity with NLP, time series forecasting, or recommendation systems is a plus
- Knowledge of big data technologies (Spark, Hive, Presto) is desirable
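For the A/B testing responsibility above, a minimal sketch of the standard two-proportion z-test, using made-up conversion counts, might look like this:

```python
# Minimal sketch: a two-proportion z-test for an A/B conversion experiment.
# The counts below are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

conversions = [412, 471]    # variant A, variant B
visitors = [10_000, 10_000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The difference is statistically significant at the 5% level.")
```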
Posted 1 month ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Remote
Hiring for a US-based Multinational Company (MNC)

We are seeking a skilled and detail-oriented Data Engineer to join our team. In this role, you will design, build, and maintain scalable data pipelines and infrastructure to support business intelligence, analytics, and machine learning initiatives. You will work closely with data scientists, analysts, and software engineers to ensure that high-quality data is readily available and usable.

Responsibilities:
- Design and implement scalable, reliable, and efficient data pipelines for processing and transforming large volumes of structured and unstructured data.
- Build and maintain data architectures including databases, data warehouses, and data lakes.
- Collaborate with data analysts and scientists to support their data needs and ensure data integrity and consistency.
- Optimize data systems for performance, cost, and scalability.
- Implement data quality checks, validation, and monitoring processes (see the sketch after this list).
- Develop ETL/ELT workflows using modern tools and platforms.
- Ensure data security and compliance with relevant data protection regulations.
- Monitor and troubleshoot production data systems and pipelines.

Requirements:
- Proven experience as a Data Engineer or in a similar role
- Strong proficiency in SQL and at least one programming language such as Python, Scala, or Java
- Experience with data pipeline tools such as Apache Airflow, Luigi, or similar
- Familiarity with modern data platforms and tools: Big Data (Hadoop, Spark); Data Warehousing (Snowflake, Redshift, BigQuery, Azure Synapse); Databases (PostgreSQL, MySQL, MongoDB)
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of data modeling, schema design, and ETL best practices
- Strong analytical and problem-solving skills
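As one concrete reading of the data-quality bullet above, here is a minimal pandas sketch of batch validation checks run before loading records downstream; the column names and thresholds are illustrative assumptions.

```python
# Minimal sketch: simple batch data-quality checks with pandas.
# Column names and the 1% null threshold are hypothetical.
import pandas as pd

def validate(df: pd.DataFrame) -> list:
    errors = []
    if df["order_id"].duplicated().any():
        errors.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        errors.append("negative order amounts")
    null_rate = df["customer_id"].isna().mean()
    if null_rate > 0.01:
        errors.append(f"customer_id null rate {null_rate:.1%} exceeds 1%")
    return errors

batch = pd.DataFrame(
    {"order_id": [1, 2, 2],
     "amount": [10.0, -5.0, 7.5],
     "customer_id": [101, None, 103]}
)
print(validate(batch) or "all checks passed")
```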
Posted 1 month ago
0.0 - 4.0 years
9 - 13 Lacs
Pune
Work from Office
Project description
You will be working in a global team that manages and performs a global technical control. You'll be joining the Asset Management team, which looks after the asset management data foundation and operates a set of in-house developed tooling. As an IT engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.

Responsibilities
- Design, develop, and maintain scalable data solutions using Starburst.
- Collaborate with cross-functional teams to integrate Starburst with existing data sources and tools (see the sketch after this list).
- Optimize query performance and ensure data security and compliance.
- Implement monitoring and alerting systems for data platform health.
- Stay updated with the latest developments in data engineering and analytics.

Skills

Must have
- Bachelor's degree or Master's in a related technical field, or equivalent related professional experience.
- Prior experience as a Software Engineer applying new engineering principles to improve existing systems, including leading complex, well-defined projects.
- Strong knowledge of big-data languages, including: SQL, Hive, Spark/PySpark, Presto, Python.
- Strong knowledge of big-data platforms, such as: the Apache Hadoop ecosystem, AWS EMR, Qubole, or Trino/Starburst.
- Good knowledge and experience in cloud platforms such as AWS, GCP, or Azure.
- Continuous learner with the ability to apply previous experience and knowledge to quickly master new technologies.
- Demonstrates the ability to select among available technologies to implement and solve for need.
- Able to understand and design moderately complex systems.
- Understanding of testing and monitoring tools.
- Ability to test, debug, and fix issues within established SLAs.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Understanding of data governance and compliance standards.

Nice to have
- Data Architecture & Engineering: Design and implement efficient and scalable data warehousing solutions using Azure Databricks and Microsoft Fabric.
- Business Intelligence & Data Visualization: Create insightful Power BI dashboards to help drive business decisions.

Other Languages: English (C1 Advanced)
Seniority: Senior
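Since the role centres on Starburst (Trino), a minimal sketch of querying the cluster with the trino-python-client follows; the host, catalog, schema, and table are illustrative assumptions, not details of the actual platform.

```python
# Minimal sketch: querying a Starburst/Trino cluster via the
# trino-python-client. Host, catalog, schema, and table are hypothetical.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.internal",
    port=8080,
    user="data-eng",
    catalog="hive",
    schema="analytics",
)
cur = conn.cursor()
cur.execute(
    "SELECT event_date, count(*) AS views FROM page_views "
    "GROUP BY event_date ORDER BY event_date DESC LIMIT 7"
)
for event_date, views in cur.fetchall():
    print(event_date, views)
```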
Posted 1 month ago
3.0 - 8.0 years
7 - 11 Lacs
Hyderabad
Work from Office
Overview
The primary focus will be to lead development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes.

Responsibilities
- Delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget
- Contribute to solution design and build to ensure scalability, performance, and reuse of data and other components
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties
- Enthusiastic, willing, and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement
- A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language
- Customer focused and a team player

Qualifications / Experience
- Bachelor's degree in Computer Science, MIS, Business Management, or related field
- 3+ years of experience in Information Technology
- 1+ years of experience in Azure Data Lake

Technical Skills
- Proven experience in development activities in Data, BI, or Analytics projects
- Solutions delivery experience: knowledge of system development lifecycle, integration, and sustainability
- Strong knowledge of Teradata architecture and SQL
- Good knowledge of Azure Data Factory or Databricks
- Knowledge of Presto / Denodo / Infoworks is desirable
- Knowledge of data warehousing concepts and data catalog tools (Alation)
Posted 1 month ago
5.0 - 10.0 years
19 - 25 Lacs
Hyderabad
Work from Office
Overview
The primary focus will be to perform development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. This role will also have L3 responsibilities for ETL processes.

Responsibilities
- Delivery of key Azure Data Lake projects within time and budget
- Contribute to solution design and build to ensure scalability, performance, and reuse of data and other components
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Possess strong problem-solving abilities with a focus on managing to business outcomes through collaboration with multiple internal and external parties
- Enthusiastic, willing, and able to learn and continuously develop skills and techniques; enjoys change and seeks continuous improvement
- A clear communicator, both written and verbal, with good presentational skills; fluent and proficient in the English language
- Customer focused and a team player

Qualifications
- Bachelor's degree in Computer Science, MIS, Business Management, or related field
- 5+ years of experience in Information Technology
- 4+ years of experience in Azure Data Lake

Technical Skills
- Proven experience in development activities in Data, BI, or Analytics projects
- Solutions delivery experience: knowledge of system development lifecycle, integration, and sustainability
- Strong knowledge of PySpark and SQL
- Good knowledge of Azure Data Factory or Databricks
- Knowledge of Presto / Denodo is desirable
- Knowledge of FMCG business processes is desirable

Non-Technical Skills
- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities
- Exceptional written and verbal communication skills along with collaboration and listening skills
- Ability to work with agile delivery methodologies
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation
Posted 1 month ago
3.0 - 6.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, Kafka (see the sketch after this list)
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
- Experience with creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture
- Experience working with data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and Scrum development processes
- Experience in EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products in streaming, serverless, and microservices architectures and platforms
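As a rough illustration of the batch-and-streaming combination above, here is a minimal PySpark Structured Streaming sketch that reads from Kafka and writes micro-batches to object storage; the topic, broker, and paths are illustrative assumptions.

```python
# Minimal sketch: Spark Structured Streaming from Kafka to parquet.
# Broker address, topic, and S3 paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://lake/raw/orders/")
    .option("checkpointLocation", "s3://lake/_checkpoints/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```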
Posted 1 month ago
3.0 - 6.0 years
10 - 15 Lacs
Gurugram, Bengaluru
Work from Office
- 3+ years of experience in data science roles, working with tabular data in large-scale projects.
- Experience in feature engineering and working with methods such as XGBoost, LightGBM, factorization machines, and similar algorithms (see the sketch after this list).
- Experience in adtech or fintech industries is a plus.
- Familiarity with clickstream data, predictive modeling for user engagement, or bidding optimization is highly advantageous.
- MS or PhD in mathematics, computer science, physics, statistics, electrical engineering, or a related field.
- Proficiency in Python (3.9+), with experience in scientific computing and machine learning tools (e.g., NumPy, Pandas, SciPy, scikit-learn, matplotlib, etc.).
- Familiarity with deep learning frameworks (such as TensorFlow or PyTorch) is a plus.
- Strong expertise in applied statistical methods, A/B testing frameworks, advanced experiment design, and interpreting complex experimental results.
- Experience querying and processing data using SQL and working with distributed data storage solutions (e.g., AWS Redshift, Snowflake, BigQuery, Athena, Presto, MinIO, etc.).
- Experience in budget allocation optimization, lookalike modeling, LTV prediction, or churn analysis is a plus.
- Ability to manage multiple projects, prioritize tasks effectively, and maintain a structured approach to complex problem-solving.
- Excellent communication and collaboration skills to work effectively with both technical and business teams.
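For the gradient-boosting methods named above, a minimal XGBoost sketch on synthetic tabular data (standing in for real clickstream features) might look like this:

```python
# Minimal sketch: training and evaluating a gradient-boosted classifier
# on synthetic tabular data; the features stand in for real clickstream
# signals.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"holdout AUC: {auc:.3f}")
```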
Posted 1 month ago
3.0 - 6.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Overview
The individual will spend time building and maintaining our in-house planogram platform and leverage analytical and critical reasoning to solve complex, multidimensional problems using quantitative information and applying statistical and machine learning techniques. The C# .NET Developer will work with team members to develop the software that will implement our product assortment and placement onto PepsiCo's planogram platform.

Responsibilities
- Expand and maintain the in-house planogram/reporting platform built using the C# .NET framework
- Work with the team lead on enhancing the platform
- Optimize shelf assortment across multiple categories to satisfy days-of-supply, blocking, and flow constraints
- Expand the platform to new categories
- Apply machine learning techniques to assortment optimization and product placement
- Enhance and maintain the platform UI

Qualifications
- B.S./M.S. in a quantitative discipline required (e.g. computer science, mathematics, operations research, engineering)
- 7+ years of coding experience in C#, specifically using the .NET framework
- 3+ years of coding experience in Angular or any equivalent JS framework
- Strong skills in C# using ASP.NET and .NET Core frameworks, LINQ, and Entity Framework
- Experience with using project management tools such as DevOps
- Ability to support/develop Windows and web platforms simultaneously
- High-level querying skills using SQL languages such as SQL or Presto; knowledge of window functions, joins, and sub-queries in SQL Server or any RDBMS
- Experience with Azure, .NET Core, Visual Studio, SQL Server
Posted 1 month ago
11.0 - 15.0 years
35 - 40 Lacs
Hyderabad
Work from Office
Overview
The primary focus will be to lead development work within the Azure Data Lake environment and other related ETL technologies, with the responsibility of ensuring on-time and on-budget delivery, satisfying project requirements while adhering to enterprise architecture standards. The role will lead key data lake projects and resources, including innovation-related initiatives (e.g. adoption of technologies like Databricks, Presto, Denodo, Python, Azure Data Factory; database encryption; enabling rapid experimentation, etc.). This role will also have L3 and release management responsibilities for ETL processes.

Responsibilities
- Lead delivery of key Enterprise Data Warehouse and Azure Data Lake projects within time and budget
- Drive solution design and build to ensure scalability, performance, and reuse of data and other components
- Ensure on-time and on-budget delivery which satisfies project requirements, while adhering to enterprise architecture standards
- Manage work intake, prioritization, and release timing, balancing demand and available resources; ensure tactical initiatives are aligned with the strategic vision and business needs
- Oversee coordination and partnerships with Business Relationship Managers, Architecture, and IT services teams to develop and maintain EDW and data lake best practices and standards, along with appropriate quality assurance policies and procedures
- May lead a team of employee and contract resources to meet build requirements:
  - Set priorities for the team to ensure task completion
  - Coordinate work activities with other IT services and business teams
  - Hold the team accountable for milestone deliverables
  - Provide L3 support for existing applications
  - Release management

Qualifications
- Bachelor's degree in Computer Science, MIS, Business Management, or related field
- 11+ years of experience in Information Technology or Business Relationship Management
- 7+ years of experience in Data Warehouse/Azure Data Lake
- 3 years of experience in Azure Data Lake
- 2 years of experience in project management

Technical Skills
- Thorough knowledge of data warehousing / data lake concepts
- Hands-on experience with tools like Azure Data Factory, Databricks, PySpark, and other data management tools on Azure
- Proven experience in managing Data, BI, or Analytics projects
- Solutions delivery experience: expertise in system development lifecycle, integration, and sustainability
- Experience in data modeling or database experience

Non-Technical Skills
- Excellent remote collaboration skills
- Experience working in a matrix organization with diverse priorities
- Experience dealing with and managing multiple vendors
- Exceptional written and verbal communication skills along with collaboration and listening skills
- Ability to work with agile delivery methodologies
- Ability to ideate requirements and design iteratively with business partners without formal requirements documentation
- Ability to budget resources and funding to meet project deliverables
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Bengaluru
Work from Office
About us: As a Fortune 50 company with more than 400,000 team members worldwide, Target is an iconic brand and one of America's leading retailers. Joining Target means promoting a culture of mutual care and respect and striving to make the most meaningful and positive impact. Becoming a Target team member means joining a community that values different voices and lifts each other up. Here, we believe your unique perspective is important, and you'll build relationships by being authentic and respectful.

Overview about Target in India: At Target, we have a timeless purpose and a proven strategy. And that hasn't happened by accident. Some of the best minds from different backgrounds come together at Target to redefine retail in an inclusive learning environment that values people and delivers world-class outcomes. That winning formula is especially apparent in Bengaluru, where Target in India operates as a fully integrated part of Target's global team and has more than 4,000 team members supporting the company's global strategy and operations.

About the Role: The Senior RBX Data Specialist role at Target in India involves the end-to-end management of data: building and maintaining pipelines through ETL/ELT and data modeling, ensuring data accuracy and system performance, and resolving data flow issues. It also requires analyzing data to generate insights, creating visualizations for stakeholders, automating processes for efficiency, and collaborating effectively across both business and technical teams. You will also answer ad-hoc questions from your business users by conducting quick analysis on relevant data, identify trends and correlations, and form hypotheses to explain the observations. Some of this will lead to bigger projects of increased complexity, where you will have to work as part of a bigger team, but also independently execute specific tasks. Finally, you are expected to always adhere to the project schedule and technical rigor, as well as requirements for documentation, code versioning, etc.

Key Responsibilities
- Data Pipeline and Maintenance: Monitor data pipelines and warehousing systems to ensure optimal health and performance. Ensure data integrity and accuracy throughout the data lifecycle.
- Incident Management and Resolution: Drive the resolution of data incidents and document their causes and fixes, collaborating with teams to prevent recurrence.
- Automation and Process Improvement: Identify and implement automation opportunities and DataOps best practices to enhance the efficiency, reliability, and scalability of data processes.
- Collaboration and Communication: Work closely with data teams and stakeholders to understand data pipeline architecture and dependencies, ensuring timely and accurate data delivery while effectively communicating data issues and participating in relevant discussions.
- Data Quality and Governance: Implement and enforce data quality standards, monitor metrics for improvement, and support data governance by ensuring policy compliance.
- Documentation and Reporting: Create and maintain clear and concise documentation of data pipelines, processes, and troubleshooting steps. Develop and generate reports on data operations performance and key metrics.

Core responsibilities are described within this job description. Job duties may change at any time due to business needs.

About You
- B.Tech / B.E. or equivalent (completed) degree
- 5+ years of relevant work experience
- Experience in Marketing/Customer/Loyalty/Retail analytics is preferable
- Exposure to A/B testing
- Familiarity with big data technologies, data languages, and visualization tools
- Exposure to languages such as Python and R for data analysis and modelling
- Proficiency in SQL for data extraction, manipulation, and analysis, with experience in big data query frameworks such as Hive, Presto, SQL, or BigQuery
- Solid foundational knowledge of mathematics, statistics, and predictive modelling techniques, including linear regression, logistic regression, time-series models, and classification techniques
- Ability to simplify complex technical and analytical methodologies for easier comprehension by broad audiences
- Ability to identify process and tool improvements and implement change
- Excellent written and verbal English communication skills for global working
- Motivation to initiate, build, and maintain global partnerships
- Ability to function in group and/or individual settings
- Willing and able to work from our office location (Bangalore HQ) as required by business needs and brand initiatives

Useful Links:
Life at Target: https://india.target.com/
Benefits: https://india.target.com/life-at-target/workplace/benefits
Culture: https://india.target.com/life-at-target/belonging
Posted 1 month ago
7.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Job Information
Job Opening ID: ZR_2162_JOB
Date Opened: 15/03/2024
Industry: Technology
Job Type:
Work Experience: 7-9 years
Job Title: Sr Data Engineer
City: Bangalore
Province: Karnataka
Country: India
Postal Code: 560004
Number of Positions: 5

Mandatory Skills: Microsoft Azure, Hadoop, Spark, Databricks, Airflow, Kafka, PySpark

Requirements:
- Experience working with distributed technology tools for developing batch and streaming pipelines using SQL, Spark, Python, Airflow, Scala, Kafka
- Experience in cloud computing, e.g., AWS, GCP, Azure, etc.
- Able to quickly pick up new programming languages, technologies, and frameworks
- Strong skills building positive relationships across Product and Engineering
- Able to influence and communicate effectively, both verbally and written, with team members and business stakeholders
- Experience with creating/configuring Jenkins pipelines for a smooth CI/CD process for managed Spark jobs, building Docker images, etc.
- Working knowledge of data warehousing, data modelling, governance, and data architecture
- Experience working with data platforms, including EMR, Airflow, Databricks (Data Engineering & Delta Lake components)
- Experience working in Agile and Scrum development processes
- Experience in EMR/EC2, Databricks, etc.
- Experience working with data warehousing tools, including SQL databases, Presto, and Snowflake
- Experience architecting data products in streaming, serverless, and microservices architectures and platforms
Posted 1 month ago
8.0 - 13.0 years
40 - 65 Lacs
Bengaluru
Work from Office
About the team
When 5% of Indian households shop with us, it's important to build resilient systems to manage millions of orders every day. We've done this with zero downtime! Sounds impossible? Well, that's the kind of engineering muscle that has helped Meesho become the e-commerce giant that it is today. We value speed over perfection, and see failures as opportunities to become better. We've taken steps to inculcate a strong 'Founder's Mindset' across our engineering teams, making us grow and move fast. We place special emphasis on the continuous growth of each team member, and we do this with regular 1-1s and open communication. As Engineering Manager, you will be part of self-starters who thrive on teamwork and constructive feedback. We know how to party as hard as we work! If we aren't building unparalleled tech solutions, you can find us debating the plot points of our favourite books and games, or even gossipping over chai. So, if a day filled with building impactful solutions with a fun team sounds appealing to you, join us.

About the role
We are looking for a seasoned Engineering Manager well-versed with emerging technologies to join our team. As an Engineering Manager, you will ensure consistency and quality by shaping the right strategies. You will keep an eye on all engineering projects and ensure all duties are fulfilled. You will analyse other employees' tasks and carry on collaborations effectively. You will also transform newbies into experts and build reports on the progress of all projects.

What you will do
- Design tasks for other engineers, keeping Meesho's guidelines and standards in mind
- Keep a close watch on various projects and monitor the progress
- Drive excellence in quality across the organisation and the solutioning of product problems
- Collaborate with the sales and design teams to create new products
- Manage engineers and take ownership of the project while ensuring product scalability
- Conduct regular meetings to plan and develop reports on the progress of projects

What you will need
- Bachelor's / Master's in computer science
- At least 8+ years of professional experience
- At least 4+ years of experience in managing software development teams
- Experience in building large-scale distributed systems
- Experience in scalable platforms
- Expertise in Java/Python/Go-Lang and multithreading
- Good understanding of Spark and its internals
- Deep understanding of transactional and NoSQL DBs
- Deep understanding of messaging systems such as Kafka
- Good experience with cloud infrastructure, preferably AWS
- Ability to drive sprints and OKRs, with good stakeholder management experience
- Exceptional team management skills; experience in managing a team of 4-5 junior engineers
- Good understanding of streaming and real-time pipelines
- Good understanding of data modelling concepts and data quality tools
- Good knowledge of Business Intelligence tools: Metabase, Superset, Tableau, etc.
- Good to have: knowledge of Trino, Flink, Presto, Druid, Pinot, etc.
- Good to have: knowledge of data pipeline building
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Skill required: Delivery - Marketing Analytics and Reporting
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years

What would you do?
Data & AI: Analytical processes and technologies applied to marketing-related data to help businesses understand and deliver relevant experiences for their audiences, understand their competition, measure and optimize marketing campaigns, and optimize their return on investment.

What are we looking for?
Data Analytics, with a specialization in the marketing domain.

Domain-specific skills:
- Familiarity with ad tech and B2B sales

Technical skills:
- Proficiency in SQL and Python
- Experience in efficiently building, publishing, and maintaining robust data models and warehouses for self-serve querying and advanced data science and ML analytic purposes
- Experience in conducting ETL/ELT with very large and complicated datasets and handling DAG data dependencies
- Strong proficiency with SQL dialects on distributed or data-lake-style systems (Presto, BigQuery, Spark/Hive SQL, etc.), including SQL-based experience in nested data structure manipulation, windowing functions, query optimization, data partitioning techniques, etc. Knowledge of Google BigQuery optimization is a plus
- Experience in schema design and data modeling strategies (e.g. dimensional modeling, data vault, etc.)
- Significant experience with dbt (or similar tools) and Spark-based (or similar) data pipelines
- General knowledge of Jinja templating in Python (see the sketch after this list)
- Hands-on experience with cloud provider integration and automation via CLIs and APIs

Soft skills:
- Ability to work well in a team
- Agility for quick learning
- Written and verbal communication

Roles and Responsibilities:
- In this role you are required to do analysis and solving of increasingly complex problems
- Your day-to-day interactions are with peers within Accenture
- You are likely to have some interaction with clients and/or Accenture management
- You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments
- Decisions that are made by you impact your own work and may impact the work of others
- In this role you would be an individual contributor and/or oversee a small work effort and/or team
- Please note that this role may require you to work in rotational shifts
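For the Jinja-templating skill listed above, here is a minimal sketch of rendering a dbt-style SQL model from Python; the table and column names are illustrative assumptions.

```python
# Minimal sketch: rendering a dbt-style SQL model with Jinja in Python.
# marketing.campaign_spend and its columns are hypothetical.
from jinja2 import Template

sql_template = Template(
    """
    SELECT
        account_id,
        SUM(spend) AS total_spend
    FROM {{ source_table }}
    WHERE event_date BETWEEN '{{ start_date }}' AND '{{ end_date }}'
    GROUP BY account_id
    """
)

rendered = sql_template.render(
    source_table="marketing.campaign_spend",
    start_date="2024-01-01",
    end_date="2024-01-31",
)
print(rendered)
```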
Posted 1 month ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
WHAT YOU WILL WORK ON
- Serve as a liaison between product, engineering, and data consumers by analyzing the data, finding gaps, and helping drive the roadmap
- Support and troubleshoot issues (data and process), identify root causes, and proactively recommend sustainable corrective actions by collaborating with engineering/product teams
- Communicate actionable insights using data, often for stakeholders and non-technical audiences
- Write technical specifications describing requirements for data movement, transformation, and quality checks

WHAT YOU BRING
- Bachelor's Degree in Computer Science, MIS, other quantitative disciplines, or related fields
- 3-7 years of relevant analytical experience translating strategic vision into requirements and working with the best engineers, product managers, and data scientists
- Ability to conduct data analysis, develop and test hypotheses, and deliver insights with minimal supervision
- Experience identifying and defining KPIs using data for business areas such as Sales, Consumer Behaviour, Supply Chain, etc.
- Exceptional SQL skills
- Experience with a modern visualization tool stack, such as Tableau, Power BI, Domo, etc.
- Knowledge of open-source, big data, and cloud infrastructure such as AWS, Hive, Snowflake, Presto, etc.
- Incredible attention to detail, with a structured problem-solving approach
- Excellent communication skills (written and verbal)
- Experience with agile development methodologies
- Experience with retail or ecommerce domains is a plus
Posted 1 month ago
5.0 - 10.0 years
13 - 22 Lacs
Bengaluru
Work from Office
Job Opportunity: Senior Data Analyst (Bangalore)
Location: Bangalore, India
Company: GSPANN Technologies
Apply: Send your resume to heena.ruchwani@gspann.com

GSPANN is hiring a Senior Data Analyst with 5-7 years of experience to join our dynamic team in Bangalore!

What We're Looking For:
- Education: Bachelor's degree in Computer Science, MIS, or a related field
- Experience: 5-7 years in data analysis, with a strong ability to translate business strategy into actionable insight
- Advanced SQL expertise
- Proficiency in Tableau, Power BI, or Domo
- Experience with AWS, Hive, Snowflake, Presto
- Ability to define and track KPIs across domains like Sales, Consumer Behavior, and Supply Chain
- Strong problem-solving skills and attention to detail
- Excellent communication and collaboration abilities
- Experience working in Agile environments
- Retail or eCommerce domain experience is a plus

If this sounds like the right fit for you, don't wait: send your updated resume to heena.ruchwani@gspann.com today!
Posted 2 months ago
3.0 - 7.0 years
11 - 15 Lacs
Bengaluru
Work from Office
Your Role
- Experience in data engineering and end-to-end implementation of CDP projects.
- Proficient in SQL, CDP (Treasure Data), Python/Dig-Dag, Presto/SQL, and data engineering.
- Hands-on experience with Treasure Data CDP implementation and management.
- Excellent SQL skills, including advanced query writing and optimization.
- Oversee the end-to-end maintenance and operation of the Treasure Data CDP.
- Familiarity with data integration, API operations, and audience segmentation.

Your Profile
- Experience in unifying data across multiple brands and regions, ensuring consistency and accuracy.
- Ability to create and manage data workflows in Treasure Data.
- Collaborate with cross-functional teams to ensure successful data integration and usage.
- Troubleshoot and optimize data pipelines and processes for scalability and performance.
- Stay updated on the latest features and best practices in Treasure Data and related technologies.
Posted 2 months ago