Home
Jobs

21 Wrangler Jobs

JobPe aggregates listings so they are easy to find, but applications are submitted directly on the original job portal.

5.0 - 7.0 years

0 - 0 Lacs

India

On-site

Only male candidates with a minimum of 5 to 7 years of experience driving cars such as Mercedes, BMW, Audi, XUV 700, Baleno, Wrangler Jeep, Ioniq 5 Electric, etc. are eligible to apply for this position. Candidates with the above experience can contact HR at 6370702277 and mail their CV to hr@pjinternationals.com. Job Type: Full-time. Pay: ₹12,000.00 - ₹15,000.00 per month. Work Location: In person. Application Deadline: 30/06/2025. Expected Start Date: 18/06/2025.

Posted 12 hours ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Company description
Prad4x4™, a unit of Prad Automotive, an ISO 9001:2015 company, is a renowned name in hard-core off-road/expedition equipment. Prad4x4™ products are designed, tested and manufactured in one of the biggest industrial hubs in Asia, and used and abused by hard-core off-roaders nationwide. The full line of Prad 4x4 products is designed to provide unmatched protection to the off-road machine and its occupants, and to look great while doing it. From CNC laser cutting, CNC bending and tube notching to high-pressure die-forming and precision brake-forming, Prad 4x4 products are made to handle whatever the trail can throw at you. We also deal in 4x4 accessories such as expedition roof racks, bumpers, bull bars, rock sliders, underbody protection, tyre carriers, auxiliary lamps, roll cages, hardtops, etc., for the Mahindra Thar, Isuzu D-Max V-Cross, Mahindra Scorpio, Mahindra Bolero, Tata Safari, Tata Xenon, Scorpio Getaway, Jeep Wrangler, Force Gurkha, Maruti Suzuki Gypsy and other 4x4s and premium SUVs.

Job description
Bachelor's degree in Mechanical Engineering (fresher). Execute the design, analysis, or evaluation of automotive components using sound mechanical engineering principles. Sound knowledge of CAD tools for conceptual design and R&D projects. Executes aspects of mechanical design engineering – material selection criteria, failure modes, evaluation methods (hand calculations and finite element analysis) – with some guidance. Work with the project team to execute projects on time. Create and update the database of design records for future reference. Read and understand 2D drawings and 3D models. A good understanding of manufacturing processes, tools and methods is also required. An individual contributor with proven interpersonal skills and good communication. Adds value through innovation, process improvements and clear thinking. Understands and appreciates tasks and their relation to overall project scope.

Job Types: Full-time, Fresher. Pay: ₹120,000.00 - ₹144,000.00 per year. Benefits: Provident Fund. Schedule: Morning shift. Education: Bachelor's (Required). Location: Bengaluru, Karnataka (Required).

Posted 18 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


What makes this role special
Join a green-field enterprise solutions project that spans cloud infra, data pipelines, QA automation, BI dashboards and business process analysis. Spend your first year rotating through four pods, discovering where you shine, then lock into the stream you love (DevOps, Data Engineering, QA, BI, or Business Analysis). Work side-by-side with senior architects and PMs; demo every Friday; leave with production-grade experience most freshers wait years to gain.

Rotation roadmap (three months each)
DevOps Starter – write Terraform variables, tweak Helm values, add a GitHub Action that auto-lints PRs.
Data Wrangler – build a NiFi flow (CSV → S3 Parquet), add an Airflow DAG, validate schemas with Great Expectations.
QA Automation – write PyTest cases for the WhatsApp bot, create a k6 load script, plug Allure reports into CI.
BI / Business Analysis – design a Superset dataset & dashboard, document KPIs, shadow the PM to craft a user story and UAT sheet.

Day-to-day you will
Pick tickets from your pod's board and push clean pull requests or dashboard changes.
Pair with mentors, record lessons in the wiki, and improve run-books as you go.
Demo your work (max 15 min) in our hybrid Friday huddle.

Must-have spark
Basic coding in Python or JavaScript and Git fundamentals (clone → branch → PR).
Comfortable with SQL JOINs & GROUP BY and spreadsheets for quick analysis.
Curious mindset, clear written English, happy to ask "why?" and own deadlines.

Bonus points
A hobby Docker or AWS free-tier project.
A Telegram/WhatsApp bot or hackathon win you can show.
Contributions to open source or a college IoT demo.

What success looks like
Ship at least twelve merged PRs/dashboards in your first quarter.
Automate one manual chore the seniors used to dread.
By month twelve you can independently take a user story from definition → code or spec → test → demo.

Growth path
Junior ➜ Associate II ➜ Senior (lead a pod); pay and AWS certifications climb with you.

How to apply
Fork github.com/company/erpnext-starter, fix any "good-first-issue", open a PR. Email your resume, PR link, and a 150-word story about the coolest thing you've built. Short-listed candidates get a 30-min Zoom chat (no riddles) and a 24-hr mini-task aligned to your preferred first rotation. We hire attitude over pedigree: show you learn fast, document clearly, and love building, and you're in.
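The rotation items above name concrete tools (NiFi, Airflow, Great Expectations, PyTest). As a small, hypothetical sketch of the kind of task the Data Wrangler pod describes, the following Airflow DAG converts a CSV drop to Parquet and then runs a basic schema check; the file paths, column names and DAG id are invented, and the validation step is plain pandas rather than a Great Expectations suite.

```python
# Minimal sketch, assuming local paths and invented column names; a real task
# would land the Parquet on S3 and use a Great Expectations suite instead.
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_CSV = "/data/incoming/orders.csv"          # hypothetical landing path
PARQUET_OUT = "/data/curated/orders.parquet"   # hypothetical curated path
EXPECTED_COLUMNS = {"order_id", "customer_id", "amount", "order_date"}


def csv_to_parquet():
    """Read the raw CSV and write it back out as Parquet (needs pyarrow installed)."""
    df = pd.read_csv(RAW_CSV, parse_dates=["order_date"])
    df.to_parquet(PARQUET_OUT, index=False)


def validate_schema():
    """Fail the task if expected columns are missing or amounts are negative."""
    df = pd.read_parquet(PARQUET_OUT)
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"Missing columns: {missing}")
    if (df["amount"] < 0).any():
        raise ValueError("Found negative order amounts")


with DAG(
    dag_id="orders_csv_to_parquet",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    convert = PythonOperator(task_id="csv_to_parquet", python_callable=csv_to_parquet)
    validate = PythonOperator(task_id="validate_schema", python_callable=validate_schema)
    convert >> validate
```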

Posted 23 hours ago

Apply

8.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description

Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

A key aspect of the MDLZ Google Cloud BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers, ensuring consistent input to o9.

8+ years of overall industry experience and a minimum of 8-10 years of experience building and deploying large-scale data processing pipelines in a production environment. Focus on excellence: has practical experience of data-driven approaches, is familiar with the application of data security strategy, is familiar with well-known data engineering tools and platforms. Technical depth and breadth: able to build and operate data pipelines, build and operate data storage, has worked on big data architecture within distributed systems; is familiar with infrastructure definition and automation in this context; is aware of adjacent technologies to the ones they have worked on; can speak to the alternative tech choices to those made on their projects. Implementation and automation of internal data extraction from SAP BW / HANA. Implementation and automation of external data extraction from openly available internet data sources via APIs. Data cleaning, curation and enrichment using Alteryx, SQL, Python, R, PySpark, SparkR. Preparing consolidated DataMarts for use by Data Scientists and managing SQL databases. Exposing data via Alteryx / SQL Database for consumption in Tableau. Data documentation maintenance and updates. Collaboration and workflow using a version control system (e.g., GitHub). Learning ability: is self-reflective, has a hunger to improve, has a keen interest to drive their own learning, and applies theoretical knowledge to practice.

Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Data engineering concepts: experience in working with data lakes, data warehouses and data marts, and has implemented ETL/ELT and SCD concepts.
ETL or data integration tools: Experience in Talend is highly desirable. Analytics: Fluent with SQL and PL/SQL, and has used analytics tools like BigQuery for data analytics. Cloud experience: Experienced in GCP services like Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery. Data sources: Experience of working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc. Programming: Understanding of OOP concepts and hands-on experience with Python/Java for programming and scripting. Data processing: Experience working with data processing platforms like Dataflow or Databricks. Orchestration: Experience in orchestrating/scheduling data pipelines using tools like Airflow and Alteryx. Keep our data separated and secure across national boundaries through multiple data centers and Azure regions. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Rich experience of working in the FMCG industry. Deep knowledge in manipulating, processing, and extracting value from datasets; 5+ years of experience in data engineering, business intelligence, data science, or a related field; proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing; strong project management skills and the ability to plan and prioritize work in a fast-paced environment; experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau; ability to think creatively, highly driven and self-motivated; knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs). No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular – Analytics & Modelling / Analytics & Data Science
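Since the posting above calls out ETL/ELT and SCD concepts on BigQuery, here is a hedged sketch of one common pattern, an SCD Type 2 "expire then insert" step run through the google-cloud-bigquery client. The project, dataset, table and column names are invented and this is not Mondelēz's actual pipeline.

```python
# Illustrative SCD Type 2 step under assumed table names: first close the current
# row for customers whose attributes changed, then insert a fresh current row for
# changed and brand-new customers. Assumes application-default GCP credentials.
from google.cloud import bigquery

client = bigquery.Client()

EXPIRE_CHANGED = """
UPDATE `my_project.mart.dim_customer` d
SET is_current = FALSE, valid_to = CURRENT_DATE()
WHERE d.is_current = TRUE
  AND EXISTS (
    SELECT 1 FROM `my_project.staging.customer_updates` s
    WHERE s.customer_id = d.customer_id AND s.segment != d.segment
  )
"""

INSERT_NEW_VERSIONS = """
INSERT INTO `my_project.mart.dim_customer`
  (customer_id, segment, valid_from, valid_to, is_current)
SELECT s.customer_id, s.segment, CURRENT_DATE(), DATE '9999-12-31', TRUE
FROM `my_project.staging.customer_updates` s
LEFT JOIN `my_project.mart.dim_customer` d
  ON d.customer_id = s.customer_id AND d.is_current = TRUE
WHERE d.customer_id IS NULL
"""

for sql in (EXPIRE_CHANGED, INSERT_NEW_VERSIONS):
    client.query(sql).result()  # .result() blocks until each DML job finishes
```

Running the expire statement before the insert is what lets a single "no current row exists" condition cover both changed customers and genuinely new ones.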

Posted 1 day ago

Apply

6.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization. Hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred.The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week. A key aspect of the MDLZ DataHub Google BigQuery platform is handling the complexity of inbound data, which often does not follow a global design (e.g., variations in channel inventory, customer PoS, hierarchies, distribution, and promo plans). You will assist in ensuring the robust operation of pipelines that translate this varied inbound data into the standardized o9 global design. This also includes managing pipelines for different data drivers (> 6 months vs. 0-6 months), ensuring consistent input to o9. '6+ years of overall industry experience and minimum of 6-8 years of experience building and deploying large scale data processing pipelines in a production environment Focus on excellence: Has practical experience of Data-Driven Approaches, Is familiar with the application of Data Security strategy, Is familiar with well know data engineering tools and platforms Technical depth and breadth : Able to build and operate Data Pipelines, Build and operate Data Storage, Has worked on big data architecture within Distributed Systems. Is familiar with Infrastructure definition and automation in this context. Is aware of adjacent technologies to the ones they have worked on. Can speak to the alternative tech choices to that made on their projects. Implementation and automation of Internal data extraction from SAP BW / HANA Implementation and automation of External data extraction from openly available internet data sources via APIs Data cleaning, curation and enrichment by using Alteryx, SQL, Python, R, PySpark, SparkR Preparing consolidated DataMart for use by Data Scientists and managing SQL Databases Exposing data via Alteryx, SQL Database for consumption in Tableau Data documentation maintenance/update Collaboration and workflow using a version control system (e.g., Git Hub) Learning ability : Is self-reflective, Has a hunger to improve, Has a keen interest to drive their own learning. Applies theoretical knowledge to practice Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.Data engineering Concepts: Experience in working with data lake, data warehouse, data mart and Implemented ETL/ELT and SCD concepts. 
ETL or data integration tools: Experience in Talend is highly desirable. Analytics: Fluent with SQL and PL/SQL, and has used analytics tools like BigQuery for data analytics. Cloud experience: Experienced in GCP services like Cloud Functions, Cloud Run, Dataflow, Dataproc and BigQuery. Data sources: Experience of working with structured data sources like SAP, BW, flat files, RDBMS, etc. and semi-structured data sources like PDF, JSON, XML, etc. Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week. Data processing: Experience working with data processing platforms like Dataflow or Databricks. Orchestration: Experience in orchestrating/scheduling data pipelines using tools like Airflow and Alteryx. Keep our data separated and secure across national boundaries through multiple data centers and Azure regions. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Deep knowledge in manipulating, processing, and extracting value from datasets; at least 2 years of FMCG/CPG industry experience; 5+ years of experience in data engineering, business intelligence, data science, or a related field; proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing; strong project management skills and the ability to plan and prioritize work in a fast-paced environment; experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau; ability to think creatively, highly driven and self-motivated; knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs). No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular – Analytics & Modelling / Analytics & Data Science
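The description above emphasises Pub/Sub, real-time streaming and translating varied inbound data into a standardized design. The sketch below is purely illustrative of that idea: pull JSON events from a Pub/Sub subscription and map source-specific field names onto one target schema. The project, subscription and field names are assumptions, not the real MDLZ design.

```python
# Hypothetical streaming intake: different markets send different field names for
# the same concept, so each record is remapped before it would be written onward.
import json
from concurrent.futures import TimeoutError

from google.cloud import pubsub_v1

PROJECT_ID = "my-gcp-project"        # hypothetical project
SUBSCRIPTION_ID = "pos-events-sub"   # hypothetical subscription

FIELD_ALIASES = {
    "store": ("store_id", "outlet_code"),
    "qty": ("qty", "units_sold"),
}


def standardize(record):
    """Map a raw record onto the target schema; return None if it cannot be mapped."""
    out = {}
    for target, candidates in FIELD_ALIASES.items():
        value = next((record[c] for c in candidates if c in record), None)
        if value is None:
            return None
        out[target] = value
    return out


def callback(message):
    record = json.loads(message.data.decode("utf-8"))
    standardized = standardize(record)
    if standardized is not None:
        print("standardized:", standardized)  # a real pipeline would write to BigQuery
    message.ack()


subscriber = pubsub_v1.SubscriberClient()
path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
future = subscriber.subscribe(path, callback=callback)
try:
    future.result(timeout=30)  # listen briefly for this sketch, then stop
except TimeoutError:
    future.cancel()
```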

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Job Description Looking for a savvy Data Engineer to join team of Modeling / Architect experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.This role requires a flexible working schedule, including potential weekend support for critical operations, while maintaining a 40-hour work week. In this role, you will assist in maintaining the MDLZ DataHub Google BigQuery data pipelines and corresponding platforms (on-prem and cloud), working closely with global teams on DataOps initiatives. The D4GV platform spans across three key GCP instances: NALA, MEU, and AMEA, supporting the global rollout of o9 across all Mondelēz BUs over the next three years 5+ years of overall industry experience and minimum of 2-4 years of experience building and deploying large scale data processing pipelines in a production environment Focus on excellence: Has practical experience of Data-Driven Approaches, Is familiar with the application of Data Security strategy, Is familiar with well know data engineering tools and platforms Technical depth and breadth : Able to build and operate Data Pipelines, Build and operate Data Storage, Has worked on big data architecture within Distributed Systems. Is familiar with Infrastructure definition and automation in this context. Is aware of adjacent technologies to the ones they have worked on. Can speak to the alternative tech choices to that made on their projects. Implementation and automation of Internal data extraction from SAP BW / HANA Implementation and automation of External data extraction from openly available internet data sources via APIs Data cleaning, curation and enrichment by using Alteryx, SQL, Python, R, PySpark, SparkR Data ingestion and management in Hadoop / Hive Preparing consolidated DataMart for use by Data Scientists and managing SQL Databases Exposing data via Alteryx, SQL Database for consumption in Tableau Data documentation maintenance/update Collaboration and workflow using a version control system (e.g., Git Hub) Learning ability : Is self-reflective, Has a hunger to improve, Has a keen interest to drive their own learning. Applies theoretical knowledge to practice Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. 
Flexible Working Hours: This role requires the flexibility to work non-traditional hours, including providing support during off-hours or weekends for critical data pipeline job runs, deployments, or incident response, while ensuring the total work commitment remains a 40-hour week. Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems.

Skills And Experience
Deep knowledge in manipulating, processing, and extracting value from datasets. Support the day-to-day operations of these GCP-based data pipelines, ensuring data governance, reliability, and performance optimization; hands-on experience with GCP data services such as Dataflow, BigQuery, Dataproc, Pub/Sub, and real-time streaming architectures is preferred. 5+ years of experience in data engineering, business intelligence, data science, or a related field; proficiency with programming languages: SQL, Python, R, Spark, PySpark, SparkR for data processing; strong project management skills and the ability to plan and prioritize work in a fast-paced environment; experience with MS Azure Data Factory, MS Azure Data Lake Store, SQL Database, SAP BW / ECC / HANA, Alteryx, Tableau; ability to think creatively, highly driven and self-motivated; knowledge of SAP BW for HANA (Extractors, Transformations, Modeling aDSOs, Queries, OpenHubs). No relocation support available.

Business Unit Summary
Headquartered in Singapore, Mondelēz International’s Asia, Middle East and Africa (AMEA) region is comprised of six business units, has more than 21,000 employees and operates in more than 27 countries including Australia, China, Indonesia, Ghana, India, Japan, Malaysia, New Zealand, Nigeria, Philippines, Saudi Arabia, South Africa, Thailand, United Arab Emirates and Vietnam. Seventy-six nationalities work across a network of more than 35 manufacturing plants, three global research and development technical centers and in offices stretching from Auckland, New Zealand to Casablanca, Morocco. Mondelēz International in the AMEA region is the proud maker of global and local iconic brands such as Oreo and belVita biscuits, Kinh Do mooncakes, Cadbury, Cadbury Dairy Milk and Milka chocolate, Halls candy, Stride gum, Tang powdered beverage and Philadelphia cheese. We are also proud to be named a Top Employer in many of our markets.

Mondelēz International is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation or preference, gender identity, national origin, disability status, protected veteran status, or any other characteristic protected by law.

Job Type: Regular – Analytics & Modelling / Analytics & Data Science
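One of the duties listed in this posting is automating external data extraction from openly available internet sources via APIs, followed by cleaning and curation. The snippet below is a minimal, generic sketch of that flow; the URL, response shape and column names are placeholders, not a real Mondelēz source.

```python
# Generic API-to-Parquet extraction sketch with light pandas cleaning.
import pandas as pd
import requests

API_URL = "https://example.com/api/v1/exchange-rates"   # placeholder endpoint

resp = requests.get(API_URL, params={"base": "USD"}, timeout=30)
resp.raise_for_status()
records = resp.json()["rates"]                 # assumed response shape

df = pd.DataFrame(records)
df = df.dropna(subset=["currency", "rate"])    # basic curation: drop incomplete rows
df["rate"] = pd.to_numeric(df["rate"], errors="coerce")
df["extracted_at"] = pd.Timestamp.utcnow()     # lineage column for downstream marts

df.to_parquet("exchange_rates.parquet", index=False)   # requires pyarrow or fastparquet
```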

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Syniverse is the world’s most connected company. Whether we’re developing the technology that enables intelligent cars to safely react to traffic changes or freeing travelers to explore by keeping their devices online wherever they go, we believe in leading the world forward. Which is why we work with some of the world’s most recognized brands. Eight of the top 10 banks. Four of the top 5 global technology companies. Over 900 communications providers. And how we’re able to provide our incredible talent with an innovative culture and great benefits.

Who We're Looking For
The Sr Data Engineer is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems or building new solutions from the ground up. This role will work with developers, architects, product managers and data analysts on data initiatives and ensure optimal data delivery with good performance and uptime metrics. Your behaviors align strongly with our values.

Some Of What You'll Do
Scope of the Role – Direct Reports: This is an individual contributor role with no direct reports.

Key Responsibilities
Create, enhance, and maintain optimal data pipeline architecture and implementations.
Analyze data sets to meet functional / non-functional business requirements.
Identify, design, and implement data process improvements: automating processes, optimizing data delivery, etc.
Build infrastructure and tools to increase data ETL velocity.
Work with data and analytics experts to implement and enhance analytic product features.
Provide life-cycle support to the Operations team for existing products, services, and functionality assigned to the Data Engineering team.

Experience, Education, And Certifications
Bachelor’s degree in Computer Science, Statistics, Informatics or a related field, or equivalent work experience.
5+ years of software development experience, including 3+ years of experience in data engineering.
Experience in building and optimizing big data pipelines, architectures, and data sets.
Experience with big data tools: Hadoop, Spark, Kafka, etc.
Experience with relational SQL databases, such as PostgreSQL, MySQL, etc.
Experience with stream-processing systems: Flink, KSQL, Spark Streaming, etc.
Experience with programming languages, such as Java, Scala, Python, etc.
Experience with cloud data engineering and development, such as AWS, etc.

Additional Requirements
Familiar with Agile software design processes and methodologies.
Good analytic skills related to working with structured and unstructured datasets.
Knowledge of message queuing, stream processing and scalable big data stores.
Ownership of and accountability for tasks/projects with on-time, quality deliveries.
Good verbal and written communication skills.
Teamwork with independent design and development habits.
Work with a sense of urgency and a positive attitude.

Why You Should Join Us
Join us as we write a new chapter, guided by world-class leadership. Come be a part of an exciting and growing organization where we offer competitive total compensation, flexible/remote work, and a leadership team committed to fostering an inclusive, collaborative, and transparent organizational culture. At Syniverse, connectedness is at the core of our business. We believe diversity, equity, and inclusion among our employees is crucial to our success as a global company as we seek to recruit, develop, and retain the most talented people who want to help us connect the world. Know someone at Syniverse?
Be sure to have them submit you as a referral prior to applying for this position.
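The requirements above mention Kafka and stream-processing systems such as Spark Streaming. As a hedged illustration only, the following PySpark Structured Streaming job consumes a Kafka topic and windows the events; the broker, topic and schema are invented, and running it requires the spark-sql-kafka connector package on the classpath.

```python
# Generic Kafka + Spark Structured Streaming sketch with assumed topic and schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("delivery-events-stream").getOrCreate()

schema = StructType([
    StructField("msg_id", StringType()),
    StructField("carrier", StringType()),
    StructField("status", StringType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "delivery-events")             # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Count delivery statuses per carrier over 5-minute windows, tolerating late data.
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "carrier", "status")
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```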

Posted 2 days ago

Apply

0 years

0 Lacs

New Delhi, Delhi, India

On-site


Company Description
The company is a multifaceted business conglomerate (Jaipuria Group). The group started its retail vertical a few years ago under the name Jaipuria Brands Pvt Ltd (JBPL) and manages franchise EBOs of Adidas, Lee & Wrangler, ToysRus, Dhruv Sehgal Clothing Pvt. and ECCO across India.

Role Description
This is a full-time, on-site role for a Retail Training Manager located in New Delhi. The Retail Training Manager will be responsible for developing and delivering training programs to retail staff, enhancing sales techniques, identifying training needs, conducting performance evaluations, and staying updated with industry trends. The role involves collaborating with department heads to ensure training programs align with business goals and improve overall employee performance and customer satisfaction.

Qualifications
Experience in creating and delivering training programs, including sales techniques and customer service training. Relevant experience in the retail industry is a plus. Bachelor's degree in Education, Human Resources, Business Management, or a related field. Diploma in Footwear.

Posted 3 days ago

Apply

10.0 years

0 Lacs

India

Remote


Job Description
EMPLOYMENT TYPE: Full-Time, Permanent
LOCATION: Remote (Pan India)
SHIFT TIMINGS: 2:00 pm - 11:00 pm IST
BUDGET: As per company standards
REPORTING: This position will report to our CEO or any other Lead as assigned by Management.

The Senior Data Engineer will be responsible for building and extending our data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys working with big data and building systems from the ground up. You will collaborate with our software engineers, database architects, data analysts, and data scientists to ensure our data delivery architecture is consistent throughout the platform. You must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

What You’ll Be Doing:
● Design and build parts of our data pipeline architecture for extraction, transformation, and loading of data from a wide variety of data sources using the latest Big Data technologies.
● Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
● Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
● Work with machine learning, data, and analytics experts to drive innovation, accuracy and greater functionality in our data system.

Qualifications:
● Bachelor's degree in Engineering, Computer Science, or a relevant field.
● 10+ years of relevant and recent experience in a Data Engineer role.
● 5+ years of recent experience with Apache Spark and a solid understanding of the fundamentals.
● Deep understanding of Big Data concepts and distributed systems.
● Strong coding skills with Scala, Python, Java and/or other languages, and the ability to quickly switch between them with ease.
● Advanced working SQL knowledge and experience working with a variety of relational databases such as Postgres and/or MySQL.
● Cloud experience with Databricks.
● Experience working with data stored in many formats, including Delta tables, Parquet, CSV and JSON.
● Comfortable working in a Linux shell environment and writing scripts as needed.
● Comfortable working in an Agile environment.
● Machine Learning knowledge is a plus.
● Must be capable of working independently and delivering stable, efficient and reliable software.
● Excellent written and verbal communication skills in English.
● Experience supporting and working with cross-functional teams in a dynamic environment.
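Because this role centres on Spark, Databricks and formats such as Delta, Parquet and CSV, here is a small, generic PySpark sketch of that kind of batch job. Paths and column names are placeholders; on Databricks the Delta format is available by default, while elsewhere it needs the delta-lake libraries configured.

```python
# Illustrative batch job under assumed paths/columns: read raw CSVs, aggregate,
# and write a partitioned Delta table for downstream consumers.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

orders = (
    spark.read.option("header", True)
    .csv("/mnt/raw/orders/*.csv")                      # placeholder input path
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("order_id").isNotNull())             # drop malformed rows
)

daily = orders.groupBy("order_date").agg(
    F.count("*").alias("orders"),
    F.sum("amount").alias("revenue"),
)

(daily.write.format("delta")
 .mode("overwrite")
 .partitionBy("order_date")                            # placeholder partition column
 .save("/mnt/curated/daily_orders"))                   # placeholder output path
```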

Posted 6 days ago

Apply

0 years

0 Lacs

Greater Kolkata Area

Remote


Who We Are Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico and Nicaragua. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 15,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong. Job Posting Position will be based on India, remote Definition and configuration of key processes that drive Power BI & EDW solutions, ETL integration, data models and processes. Map capabilities of IT EDW and Power BI solutions and systems to meet business requirements. Evaluate against reporting capabilities across platforms to determine the appropriate solution (Snowflake EDW, Power BI, SAP or non-SAP solutions) For the EDW Implementation and long-term support of on-going projects and enhancements: Accountable for future IT functional and technical design, configuration and integration of the Enterprise Data Warehouse Solutions Snowflake Data Warehouse Matillion ETL and Transformation Toolset Fivetran/HVR data replication Toolset Execute workplan activities to implement EDW activities in line with EDW and Power BI solution changes Definition and documentation of processes, WRICEF, Functional Specifications, test plans/scripts, deployment plans and legacy appl. changes Design, deliver, implement, configure/develop and support data and analytics solution, reports, dashboards, etc. The position will provide ongoing support to new EDW business processes and system enhancements. Support Reporting and Analytics Processes and Configuration, Data Migration & ETL Approach, data analysis, data cleansing, data conversion design, development, Cutover, Integration Testing, maintenance and support for the delivery of multiple global EDW deployments. Working closely with Global BI Reporting team, SAP/Non-SAP Functional Leads / Analysts to understand the data functional requirements that drive the conversion design that are aligned with business and IT strategies and are in compliance with the organization’s architectural standards. Participate in key process workshops & issue resolution sessions. Contribute to the future EDW solution design, testing, deployment and change management. Provide updates to reviews of program deliverables and status. Support BI Reporting Leadership to identify and plan IT driven growth opportunities, and develop plans for executing supporting initiatives, which may include sustaining, stretch, and breakthrough innovation. Ensure consistent role definition and compliance of security in data privacy, PII, access restrictions, security audit, etc Partner with IT and BI leads and Business Leads to ensure design meets requirements and gaps and/or issues are identified / resolved Manage own work and support other team members to achieve budget and schedule Support issue and weekly progress reporting. Support identification and control of areas of risk, to drive resolution of issues. 
Work Experience Required
Enterprise Data Warehouse configuration
Development of data models and data warehouse solutions (MS SQL, Snowflake preferred)
Business reporting requirements gathering
Development of ETL solutions (Matillion, Fivetran HVR preferred)
Development of external partner application integration via EDI and SFTP file transfers
Development of dashboards, analytics and reporting (Power BI preferred)
Integration testing
Security design related to reporting and analytics functionality
Integration to external partners and other internal systems, including eCommerce, EDI, WMS solutions and B2B integration
Cutover planning and execution

Also responsible for the integrity of master data across the EDW solution and the source information systems, meeting established IT goals, including integrity across applications in external partners, supply chain, retail, ecommerce, sales, sourcing, product lifecycle and finance. Multiple BI and/or EDW lifecycle implementations preferred – Blueprint, Build/Test, Go-Live and Support.

Education And/or Certification Requirements
Bachelor's degree in a technical discipline, computer science, or other relevant discipline required.
Certified in Power BI reporting and analytics solutions.
Certification in reporting and analytics solution configuration (Snowflake or SAP preferred) is preferred.

Why Kontoor Brands?
At Kontoor, we offer a comprehensive benefits package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family's needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement. We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents.
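The EDW stack described above is Snowflake with Matillion/Fivetran loading and Power BI on top. Purely as an illustration of the Snowflake side, the snippet below runs a simple reporting query through the official Python connector; the account, credentials, warehouse and table names are placeholders, and the Matillion/Fivetran orchestration is not shown.

```python
# Hedged Snowflake reporting query via snowflake-connector-python; all identifiers
# are invented and credentials should come from a secrets manager in practice.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # placeholder account locator
    user="REPORTING_SVC",          # placeholder service user
    password="***",                # never hard-code real credentials
    warehouse="REPORTING_WH",
    database="EDW",
    schema="MART",
)

try:
    cur = conn.cursor()
    cur.execute(
        """
        SELECT region, SUM(net_sales) AS net_sales
        FROM fact_sales
        WHERE sale_date >= DATEADD(day, -30, CURRENT_DATE)
        GROUP BY region
        ORDER BY net_sales DESC
        """
    )
    for region, net_sales in cur.fetchall():
        print(region, net_sales)
finally:
    conn.close()
```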

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About the role
We are looking for a savvy Data Engineer to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline designer and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will lead our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

You will be responsible for
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Keep our data separated and secure.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Work with data and analytics experts to strive for greater functionality in our data systems.

You will need
Mandatory skills: Hadoop, Hive, Spark, any stream processing, Scala/Java, Kafka, and containerization/Kubernetes.
Good-to-have skills: functional programming, Kafka Connect, Spark Streaming, Helm charts, hands-on experience in Kubernetes.

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars – Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles – simple, fair, competitive, and sustainable.
* Your fixed pay is the guaranteed pay as per your contract of employment.
* Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company’s policy.
* In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
* Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
* We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
* Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
* Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
* Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco Bengaluru: We are a multi-disciplinary team creating a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility, providing cutting-edge technological solutions and empowering our colleagues to do ever more for our customers. With cross-functional expertise in Global Business Services and Retail Technology & Engineering, a wide network of teams and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 4,40,000 colleagues.

At Tesco Business Solutions, we have a mission to simplify, scale & partner to serve our customers, colleagues and suppliers through a best-in-class intelligent Business Services model. We do this by building a world-class business services model, executing the services model framework right at the heart of everything we do for our worldwide customers. The key objective is to implement and execute the service model across all our functions and markets consistently. The ethos of business services is to free up our colleagues from regular manual operational work. We use cognitive technology to augment our key decision-making. We have also built a Continuous Improvement (CI) culture across functions to drive bottom-up business efficiencies by optimising processes. Business services colleagues act as business partners with our group stakeholders, building a collaborative partnership and driving continuous improvement across markets and functions to deliver the best customer experience by serving our shoppers a little better every day.

At Tesco, inclusion means that Everyone's Welcome. Everyone is treated fairly and with respect; by valuing individuality and uniqueness we create a sense of belonging. Diversity and inclusion have always been at the heart of Tesco. It is embedded in our values: we treat people how they want to be treated. We always want our colleagues to feel they can be themselves at work and we are committed to helping them be at their best. Across the Tesco group we are building an inclusive workplace, a place to actively celebrate the cultures, personalities and preferences of our colleagues, who in turn help to build the success of our business and reflect the diversity of the communities we serve.
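The mandatory skills above include Hadoop, Hive and Spark. As a hedged, generic example of that combination (not Tesco's code), the following PySpark snippet reads a Hive table through a Hive-enabled SparkSession, aggregates it and writes the result back; the database and table names are invented.

```python
# Minimal Hive-on-Spark sketch with assumed metastore tables and columns.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("basket-metrics")
    .enableHiveSupport()                   # requires a configured Hive metastore
    .getOrCreate()
)

baskets = spark.table("retail.basket_items")      # hypothetical Hive table

store_daily = (
    baskets.groupBy("store_id", "business_date")
    .agg(
        F.countDistinct("basket_id").alias("baskets"),
        F.sum("line_value").alias("sales_value"),
    )
)

(store_daily.write.mode("overwrite")
 .saveAsTable("retail_mart.store_daily_sales"))   # hypothetical target table
```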

Posted 1 week ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site


Essential Responsibilities, Accountabilities & Results
Collaborate with Pipeline leadership to set, and adhere to, team standards. Monitoring and queue management to effectively utilize resources, diagnose issues, and maximize throughput. Support and troubleshooting to identify root causes of errors, and to provide feedback to various teams to ensure that issues are addressed going forward. Validation of jobs to ensure completion and that there are no incomplete images or artifacts in the output. Optimization of submissions to maintain quality and efficiency while also keeping costs down. Communication with colleagues, artists, and stakeholders regarding any issues discovered/diagnosed so that expectations are managed accordingly. Telemetry and reporting within Deadline to provide artists and stakeholders with relevant information on farm utilization, costs, etc. Write and maintain clear documentation. Other duties as assigned.

Experience / Minimum Requirements
Bachelor's degree in Computer Science, Computer Graphics, Game Development, or equivalent education and/or experience may be considered. Experience with the AWS Thinkbox Deadline render management toolkit is a must. Familiarity with other render management tools is a plus. Familiarity with CGI production tools such as Maya, Unreal Engine, Houdini and Nuke is a must. Understanding of CGI processes and production workflow. Experience with digital asset management systems like Flow or ftrack is a plus. Knowledge of Python and other development languages is a plus. Strong organizational skills to manage multiple tasks, deadlines and resources effectively, and the ability to prioritize tasks, create schedules and keep track of various project elements. Excellent communication skills to effectively liaise with stakeholders. Problem-solving skills to identify issues, analyze causes and develop effective solutions. Resourceful, adaptable and able to think critically under pressure. Experience working with networking, externally facing applications, and infrastructure.
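The telemetry and reporting duty above lends itself to a small script. The sketch below is hypothetical and does not use the Deadline API; it assumes job records have already been exported as JSON lines with pool, status and minutes fields, and simply summarises utilisation and failure rates per pool.

```python
# Hypothetical farm-telemetry summary over an exported job log (JSON lines).
# Field names (pool, status, minutes) are assumptions, not Deadline's schema.
import json
from collections import defaultdict


def summarize(path):
    """Aggregate render minutes and failure counts per pool from an exported log."""
    stats = defaultdict(lambda: {"jobs": 0, "failed": 0, "render_minutes": 0.0})
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            job = json.loads(line)
            pool = stats[job.get("pool", "unknown")]
            pool["jobs"] += 1
            pool["render_minutes"] += float(job.get("minutes", 0))
            if job.get("status") == "failed":
                pool["failed"] += 1
    return dict(stats)


if __name__ == "__main__":
    for pool, s in summarize("farm_jobs.jsonl").items():
        rate = s["failed"] / s["jobs"] if s["jobs"] else 0.0
        print(f"{pool}: {s['jobs']} jobs, {s['render_minutes']:.0f} min, {rate:.1%} failed")
```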

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

Remote


Who We Are Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico and Nicaragua. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 15,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong. Job Posting Position will be based on India, remote Definition and configuration of key processes that drive Power BI & EDW solutions, ETL integration, data models and processes. Map capabilities of IT EDW and Power BI solutions and systems to meet business requirements. Evaluate against reporting capabilities across platforms to determine the appropriate solution (Snowflake EDW, Power BI, SAP or non-SAP solutions) For the EDW Implementation and long-term support of on-going projects and enhancements: Accountable for future IT functional and technical design, configuration and integration of the Enterprise Data Warehouse Solutions Snowflake Data Warehouse Matillion ETL and Transformation Toolset Fivetran/HVR data replication Toolset Execute workplan activities to implement EDW activities in line with EDW and Power BI solution changes Definition and documentation of processes, WRICEF, Functional Specifications, test plans/scripts, deployment plans and legacy appl. changes Design, deliver, implement, configure/develop and support data and analytics solution, reports, dashboards, etc. The position will provide ongoing support to new EDW business processes and system enhancements. Support Reporting and Analytics Processes and Configuration, Data Migration & ETL Approach, data analysis, data cleansing, data conversion design, development, Cutover, Integration Testing, maintenance and support for the delivery of multiple global EDW deployments. Working closely with Global BI Reporting team, SAP/Non-SAP Functional Leads / Analysts to understand the data functional requirements that drive the conversion design that are aligned with business and IT strategies and are in compliance with the organization’s architectural standards. Participate in key process workshops & issue resolution sessions. Contribute to the future EDW solution design, testing, deployment and change management. Provide updates to reviews of program deliverables and status. Support BI Reporting Leadership to identify and plan IT driven growth opportunities, and develop plans for executing supporting initiatives, which may include sustaining, stretch, and breakthrough innovation. Ensure consistent role definition and compliance of security in data privacy, PII, access restrictions, security audit, etc Partner with IT and BI leads and Business Leads to ensure design meets requirements and gaps and/or issues are identified / resolved Manage own work and support other team members to achieve budget and schedule Support issue and weekly progress reporting. Support identification and control of areas of risk, to drive resolution of issues. 
Work Experience Required Enterprise Data Warehouse Configuration Development of Data Models and Data Warehouse solution (MS SQL, Snowflake preferred) Business Reporting Requirements gathering Development of ETL Solutions (Matillion, Fivetran HVR preferred) Development of external partner application integration via EDI, SFTP file transfers Development of Dashboards, Analytics and Reporting (Power BI preferred) Integration Testing Security Design related to reporting and analytics functionality Integration to external partners and other internal systems including eCommerce, EDI, WMS Solutions and B2B Integration Cutover planning and execution Also responsible for the integrity of Master Data across the EDW solution and the source information systems meeting established IT goals. Including integrity across applications in External Partners, Supply Chain, Retail, ecommerce, Sales, Sourcing, product lifecycle and Finance. Multiple BI and/or EDW lifecycle implementations preferred – Blueprint/Blueprint, Build/Test, Go-Live and Support Education And/or Certification Requirements Bachelor’s degree in a technical discipline, computer science, or other relevant discipline required. Certified in Power BI Reporting and Analytics solutions Certified in reporting and analytics solutions (Snowflake or SAP preferred) configuration (preferred) Why Kontoor Brands? At Kontoor, we offer a comprehensive benefit package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family’s needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement. We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents. Show more Show less

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Company: Qualcomm India Private Limited Job Area: Information Technology Group, Information Technology Group > IT Data Engineer General Summary: We are looking for a savvy Data Engineer expert to join our analytics team. The Candidate will be responsible for expanding and optimizing our data and data pipelines, as well as optimizing data flow and collection for cross functional teams. The ideal candidate has python development experience and is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. We believe that candidate with solid Software Engineering/Development is a great fit. However, we also recognize that each candidate has a unique blend of skills. The Data Engineer will work with database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams.. The right candidate will be excited by the prospect of optimizing data to support our next generation of products and data initiatives. Responsibilities for Data Engineer Create and maintain optimal data pipelines, Assemble large, complex data sets that meet functional / non-functional business requirements. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing for greater scalability, etc. Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Work with data and analytics experts to strive for greater functionality in our data systems. Performing ad hoc analysis and report QA testing. Follow Agile/SCRUM development methodologies within Analytics projects. Working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Experience building and optimizing ‘big data’ data pipelines, and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Strong analytic skills related to working with unstructured datasets. Good communication skills, a great team player and someone who has the hunger to learn newer ways of problem solving. Build processes supporting data transformation, data structures, metadata, dependency, and workload management. A successful history of manipulating, processing, and extracting value from large, disconnected datasets. Working knowledge on Unix or Shell scripting Constructing methods to test user acceptance and usage of data. Knowledge of predictive analytics tools and problem solving using statistical methods is a plus. Experience supporting and working with cross-functional teams in a dynamic environment. 
Demonstrated understanding of the Software Development Life Cycle Ability to work independently and with a team in a diverse, fast paced, and collaborative environment Excellent written and verbal communication skills A quick learner with the ability to handle development tasks with minimum or no supervision Ability to multitask We are looking for a candidate with 7+ years of experience in a Data Engineering role. They should also have experience using the following software/tools: Experience in Python, Java, etc. Experience with Google Cloud Platform. Experience with bigdata frameworks & tools - Apache Hadoop/Beam/Spark/Kafka. Exposure to workflow management & scheduling using Airflow/Prefect/Dagster Exposure to databases like (Big Query , Clickhouse). Experience to container orchestration (Kubernetes) Optional Experience on one or more BI tools (Tableau, Splunk or equivalent). Minimum Qualifications: 6+ years of IT-related work experience without a Bachelor’s degree. 2+ years of work experience with programming (e.g., Java, Python). 1+ year of work experience with SQL or NoSQL Databases. 1+ year of work experience with Data Structures and algorithms. 'Bachelor's degree and 7+ years Data Engineer/ Software Engineer (Data) Experience Minimum Qualifications: 4+ years of IT-related work experience with a Bachelor's degree in Computer Engineering, Computer Science, Information Systems or a related field. OR 6+ years of IT-related work experience without a Bachelor’s degree. 2+ years of work experience with programming (e.g., Java, Python). 1+ year of work experience with SQL or NoSQL Databases. 1+ year of work experience with Data Structures and algorithms. Bachelors / Masters or equivalent degree in computer engineering or in equivalent stream Applicants : Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries). Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law. To all Staffing and Recruiting Agencies : Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. 
If you would like more information about this role, please contact Qualcomm Careers. 3075135
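As a purely illustrative aside (not part of the Qualcomm posting): the workflow-scheduling experience listed above usually amounts to authoring small Airflow DAGs. A minimal sketch, assuming Airflow 2.x and using hypothetical task and table names, might look like this:

```python
# Illustrative sketch only; DAG, task and table names are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_events(ds, **context):
    # Placeholder: pull the previous day's raw events from a source system.
    print(f"extracting raw events for {ds}")


def load_to_warehouse(ds, **context):
    # Placeholder: load the prepared partition into the analytics warehouse.
    print(f"loading partition {ds} into analytics.events_daily")


with DAG(
    dag_id="events_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_events", python_callable=extract_events)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)

    extract >> load
```

In a real pipeline the placeholder functions would call Spark jobs, BigQuery loads or similar, but the DAG skeleton above is representative of the day-to-day work the posting describes.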

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Essential Responsibilities, Accountabilities & Results
Collaborate with Pipeline leadership to set, and adhere to, team standards.
Monitoring and queue management to effectively utilize resources, diagnose issues, and maximize throughput.
Support and troubleshooting to identify root causes of errors, and to provide feedback to various teams to ensure that issues are addressed going forward.
Validation of jobs to ensure completion and that there are no incomplete images or artifacts in the output.
Optimization of submissions to maintain quality and efficiency while also keeping costs down.
Communication with colleagues, artists, and stakeholders regarding any issues discovered/diagnosed so that expectations are managed accordingly.
Telemetry and reporting within Deadline to provide artists and stakeholders with relevant information on farm utilization, costs, etc.
Write and maintain clear documentation.
Other duties as assigned.

Experience Minimum Requirements
Bachelor’s degree in Computer Science, Computer Graphics, Game Development, or equivalent; equivalent education and/or experience may be considered.
Experience with the AWS Thinkbox Deadline render management toolkit is a must; familiarity with other render management tools is a plus.
Familiarity with CGI production tools such as Maya, Unreal Engine, Houdini and Nuke is a must.
Understanding of CGI processes and production workflow.
Experience with digital asset management systems like Flow or ftrack is a plus.
Knowledge of Python and other development languages is a plus.
Strong organizational skills to manage multiple tasks, deadlines and resources effectively, with the ability to prioritize tasks, create schedules and keep track of various project elements.
Excellent communication skills to effectively liaise with stakeholders.
Problem-solving skills to identify issues, analyze causes and develop effective solutions.
Resourceful, adaptable and able to think critically under pressure.
Experience working with networking, externally facing applications, and infrastructure.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

We’re Hiring: Chief of Staff / EA to the Founder (Podcast Pro Edition)

This is a full-time, work-from-office role. Location: Andheri West.

Are you a podcast whiz who can juggle mics, guests, and calendars—all before your morning coffee? We want you!

What You’ll Need:
Podcast Guru: You know the ins and outs of podcast production—recording, editing, publishing, and making every episode binge-worthy.
Tech-Savvy: Audio editing tools, hosting platforms, and show notes? You’re already on it.
Guest Wrangler: Book, brief, and charm guests like a pro (even the tricky ones).
Trend Spotter: You’re up to date on what’s hot in podcasting and can pitch killer episode ideas.
Organizational Ninja: Manage the founder’s calendar, keep projects on track, and handle the behind-the-scenes magic.
Communication Ace: Stellar at emails, DMs, and handling all the moving parts—no detail too small, no deadline missed.

If you’re ready to run the show (literally) and keep our founder’s world spinning, slide into our inbox! Email your resume to karishma@prachar.in with the subject line: Podcast Maverick.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Summary
Join our team at data42! The vision of data42 is to inspire collaborative groundbreaking data and AI initiatives, to reimagine drug discovery at Novartis and accelerate time to market, ultimately transforming healthcare and improving lives. We are a small team dedicated to bringing this vision to life by accelerating secondary research and data-driven decisions, where we provide scientific support and data ready for analysis in a collaborative environment, leveraging a secure and governed AI-enabled platform.

As Data Engineer, you will work alongside data scientists and domain experts to enable teams to answer scientific questions using multi-modal data on the data42 platform. You will be involved in gathering use-case requirements, performing data engineering activities, and building ETL processes/data pipelines in quick iterations to deliver data ready for analysis. You will integrate data engineering best practices and data quality checks and seek to continuously optimize efficiency.

About The Role
Your responsibilities will include, but are not limited to:
Collaborate with domain experts, data scientists and other stakeholders to fulfil use-case-specific data needs.
Design, develop, test, and maintain ETL processes/data pipelines to extract, prepare and iterate data for analysis, in close alignment with TA / DA scientific leads and data scientists.
Implement and maintain data checks to ensure accurate and high-quality data, in close collaboration with domain experts; identify and rectify data inconsistencies and irregularities.
Promote a culture of transparency and communication regarding data modifications and lineage to all stakeholders.
Implement and advocate for data engineering best practices, ensuring ETL processes/data pipelines are efficient, well-documented and well-tested.
Play a role in knowledge sharing across data42 and the wider data engineering community at Novartis.
Ensure compliance with Security and Governance Principles.

Minimum Requirements
Bachelor’s degree in computer science or another quantitative field (Mathematics, Statistics, Physics, Engineering, etc.) or equivalent practical experience.
Proven experience as a data engineer, data wrangler or in a similar role.
Exceptional programming skills with expertise in Python, R and Spark.
Experience and familiarity with a variety of data types, including but not limited to images, tabular, unstructured, and text.
Experience in scalable data processing engines, data ingestion, extraction and modeling.
Proficient knowledge of statistics, with an ability to assess data quality, errors, inconsistencies, etc.
Good knowledge of data engineering best practices.
Excellent communication and stakeholder management skills.
Demonstrated ability to work independently and as part of global Agile teams.

Desirable additional skills in two or more of the following areas:
Hands-on experience with Palantir Foundry (Code Repository, Code Workbook, Contour, Data Lineage, etc.).
Knowledge of CDISC data standards (SDTM, ADaM).
Experience using AI (e.g., GenAI/LLMs) for data wrangling.
Experience with pooling of clinical trial data.
High-level understanding of the drug discovery and development process.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together?
https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
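As an illustration of the data quality checks this role describes (not part of the Novartis posting; the dataset path, column names and thresholds below are hypothetical), a minimal PySpark sketch might look like this:

```python
# Illustrative sketch only; path, columns and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks_sketch").getOrCreate()

df = spark.read.parquet("/data/clinical/labs")  # hypothetical dataset

checks = {
    # every record should carry a subject identifier
    "missing_subject_id": df.filter(F.col("subject_id").isNull()).count(),
    # lab values outside a plausible range for this hypothetical test
    "out_of_range_value": df.filter((F.col("value") < 0) | (F.col("value") > 10_000)).count(),
    # duplicate subject/visit/test combinations
    "duplicate_rows": df.count()
    - df.dropDuplicates(["subject_id", "visit", "test_code"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```

In practice such checks would be wired into the pipeline itself so that bad partitions are flagged before analysts see them; the sketch only shows the shape of the logic.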

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Purpose
We are looking for a Senior SQL Developer to join our growing team of BI & analytics experts. The hire will be responsible for expanding and optimizing our data and data queries, as well as optimizing data flow and collection for consumption by our BI & Analytics platform. The ideal candidate is an experienced query builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The SQL Developer will support our software developers, database architects, data analysts and data scientists on data and product initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. The hire must be self-directed and comfortable supporting the data needs of multiple systems and products. The right candidate will be excited by the prospect of optimizing our company’s data architecture to support our next generation of products and data initiatives.

Job Responsibilities
Essential Functions / Requirements:
Create and maintain optimal SQL queries, views, tables and stored procedures.
Work together with various business units (BI, Product, Reporting) to develop the data warehouse platform vision, strategy, and roadmap.
Understand the development of physical and logical data models.
Ensure high-performance access to diverse data sources.
Encourage the adoption of the organization’s frameworks by providing documentation, sample code, and developer support.
Communicate progress on the adoption and effectiveness of the developed frameworks to department heads and managers.

Required Education And Experience
Bachelor’s or Master’s degree, or an equivalent combination of education and experience in a relevant field.
Understanding of T-SQL, data warehouses, star schema, data modeling, OLAP, SQL and ETL.
Experience in creating tables, views and stored procedures.
Understanding of several BI and reporting platforms, and awareness of industry trends and direction in BI/reporting and their applicability to the organization’s product strategies.
Skilled in multiple database platforms, including SQL Server and MySQL.
Knowledge of source control and project management tools like Azure DevOps, Git, and JIRA.
Familiarity with using SonarQube for clean T-SQL coding practices.
Familiarity with DevOps best practices and automation of documentation, testing, build, deployment, configuration, and monitoring.
Communication skills: It is vital that applicants have exceptional written and spoken communication skills, with active listening abilities, to contribute to making strategic decisions and advise senior management on specialized technical issues that will have an impact on the business.
Strong team-building skills: It is crucial that they have the ability to provide direction for complex projects, mentor junior team members, and communicate the organization’s preferred technologies and frameworks across development teams.
Experience: A candidate for this position must have at least 5+ years working in a data warehousing position within a fast-paced and complex business environment, working as a SQL Developer. The candidate must also have experience developing schema data models in a data warehouse environment, experience with full implementation of the system development lifecycle (SDLC), and proven, successful experience working with concepts of data integration, consolidation, enrichment, and aggregation.
A suitable candidate will also have a strong, demonstrated understanding of dimensional modeling and similar data warehousing techniques, as well as experience working with relational or multi-dimensional databases and business intelligence architectures.

Analytical Skills: A candidate for the position will have passion as well as skill in research and analytics, along with a passion for data management tools and technologies. The candidate must have the ability to perform detailed data analysis, for example in determining the content, structure, and quality of data through the examination of data samples and source systems. The hire will additionally have the ability to troubleshoot data warehousing issues and quickly resolve them.

Expected Competencies
Detail-oriented with strong organizational skills.
Ability to pay attention to programming style and neatness.
Strong English communication skills, both written and verbal.
Ability to train and mentor junior colleagues with patience and tangible results.

Work Timings
This is a full-time position. Days and hours of work are Monday through Friday, with flexibility to support different time zones ranging between 12 PM IST and 9 PM IST. The work schedule may include evening hours or weekends due to client needs, per manager instructions. This role works in hybrid mode and requires at least 2 days per week of work from the office in Hyderabad. Occasional evening and weekend work may be expected in case of job-related emergencies or client needs.

EEO Statement
Cendyn provides equal employment opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Cendyn complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training. Cendyn expressly prohibits any form of workplace harassment based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, or veteran status. Improper interference with the ability of Cendyn’s employees to perform their job duties may result in discipline up to and including discharge.

Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time, with or without notice.
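Purely as an illustration of the day-to-day work this posting describes (not Cendyn's actual schema; the connection string and star-schema object names are hypothetical placeholders), querying a fact table joined to its dimensions from Python via pyodbc might look like this:

```python
# Illustrative sketch only; the connection string and the FactBooking/DimDate/
# DimProperty objects are hypothetical placeholders, not a real schema.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=warehouse;DATABASE=dw;Trusted_Connection=yes"
)

sql = """
SELECT d.CalendarYear, d.MonthNumber, p.PropertyName, SUM(f.RoomRevenue) AS Revenue
FROM dbo.FactBooking AS f
JOIN dbo.DimDate AS d ON d.DateKey = f.DateKey
JOIN dbo.DimProperty AS p ON p.PropertyKey = f.PropertyKey
WHERE d.CalendarYear = ?
GROUP BY d.CalendarYear, d.MonthNumber, p.PropertyName
ORDER BY d.MonthNumber;
"""

cursor = conn.cursor()
cursor.execute(sql, 2024)                 # parameterised query against the star schema
for row in cursor.fetchall():
    print(row.PropertyName, row.MonthNumber, row.Revenue)
cursor.close()
conn.close()
```

The parameterised fact-to-dimension join is the basic pattern behind the T-SQL, star-schema and OLAP skills the role asks for.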

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Description
Position at DNEG

Job Title: Render Wrangler
Location: Mumbai

Must Have
3+ years of experience in a similar role.
Knowledge of the lifecycle of a VFX show.
Bachelor of Computer Science or equivalent years of experience.
Ability to handle a fast-paced, occasionally high-pressure environment.
Willingness to work very flexible hours when schedules require.
Experience working in a Linux/Unix environment.
Exceptional written and verbal communication skills in dealing with both technical and artistic groups.
Experience with wrangling render jobs with priorities in the queue system.
Understanding of meeting the project goals while monitoring the render farm’s health and status on a regular basis.
Experience with maximizing the render farm’s resources.
Experience monitoring rendering quality and quantity and making the required corrections when needed.
Experience troubleshooting and monitoring infrastructure and render farm issues, and escalating problems to the respective teams.

Nice To Have
Analytical and data reporting skills.
Render task estimation experience.
Ability to provide disk and data resource management services.

About You
Natural problem solver.
Pro-active and willing to take initiative.
Committed to achieving targets.
Impeccable accuracy and attention to detail.
Deadline driven.
Team oriented and adaptable.
Skilled multi-tasker.
Flexible and accepting of change.
Calm under pressure and capable of delivering to deadlines.

About Us
We are DNEG, one of the world’s leading visual effects and animation companies for the creation of award-winning feature film, television, and multiplatform content. We employ more than 9,000 people with worldwide offices and studios across North America (Los Angeles, Montréal, Toronto, Vancouver), Europe (London), Asia (Bangalore, Mohali, Chennai, Mumbai) and Australia (Sydney).

At DNEG, we fundamentally believe that embracing our differences is a vital component of our collective success. We are committed to creating an equitable, diverse and inclusive work environment for our global teams, where everyone feels they matter and belong. We welcome and encourage applications from all, regardless of background, experience or disability. Please let us know if you need any adjustments or support during the application process; we will do our best to accommodate your needs. We look forward to meeting you!

Posted 3 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Indeed logo

Company description Prad4x4™ a unit of Prad automotive an ISO 9001:2015 company is a renowned name in Hard Core off-road/expedition equipment, Prad4x4™ products are designed, tested and manufactured in one of the biggest industrial hubs in ASIA, and used and abused by hard core off-roaders nation-wide. The full line of Prad 4x4 products are designed to provide unmatched protection to the Off-road machine and its occupants, and look great while doing it. From CNC laser cutting, CNC bending and tube notching, to high pressure die-forming and precision brake-forming, Prad 4x4 Products are made to handle whatever the trail can throw at you. We also deal with 4x4 accessories such as Expedition roofracks, Bumpers, Bullbar, Rockslides, Underbody Protection, Tyre Carriers, Auxiliary Lamps, Rollcages, Hardtops and etc for Mahindra Thar, Isuzu Dmax Vcross, Mahindra Scorpio, Mahindra Bolero, Tata Safari, Tata Xenon, Scorpio Getaway, Jeep Wrangler, Force Gurkha, Maruti Suzuki Gypsy and other 4x4's and premium SUVs.

Job description
BE fresher (Inventory/Logistics).
Well-developed and demonstrated interpersonal skills; a team player collaborating with cross-functional teams.
Inventory planning support and management to meet customer demand while maximising resource utilisation.
Logistics planning and management.
Excellent communication skills in Kannada, Hindi and English (fluent in spoken).
System admin activities (MS Office / Google Docs) to track inventory and shipping activities.
Reliable, proactive, self-motivated, with attention to detail.
Organised, with proven ability to work autonomously and under tight time constraints.
Identify areas for process improvement and drive initiatives to enhance planning, efficiency and accuracy.
Must have a sense of ownership and responsibility, and the ability to deliver within given deadlines.

Job Type: Full-time
Pay: ₹180,000.00 - ₹216,000.00 per year
Benefits: Provident Fund
Schedule: Morning shift
Experience: Inventory management: 1 year (Preferred)
Location: Bengaluru, Karnataka (Required)

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Greater Kolkata Area

Remote

Linkedin logo

Who We Are
Kontoor Brands, Inc. (KTB) is the parent company of Wrangler®, Lee® and Rock & Republic®, with owned manufacturing facilities in Mexico and Nicaragua. Kontoor also owns and operates over 140 retail stores across the globe. Our global company employs more than 15,000 people in 65 countries, with world headquarters in Greensboro, North Carolina, and regional headquarters in Geneva and Hong Kong.

Job Posting
This position will be based remotely in India. The role requires you to provide thought leadership and strategies to evolve the global SAP technical landscape, proactively contributing to infrastructure and application availability and resiliency through performance tuning.

Engineering, Code Development And Operations
IT service delivery and operations, including infrastructure provisioning, configuration management, monitoring, deployment, and incident response using IaC platforms, tools, and solutions such as IPSoft Amelia, Red Hat Ansible, OpenShift, Azure DevOps, Terraform, Broadcom-CA Automic, Dynatrace, Azure Monitoring and others, to reduce manual effort and increase productivity and resiliency.
Analyze existing complex IT processes and workflows, identify areas for improvement, and develop automation strategies, plans, and code to increase efficiency, reduce errors, and enhance scalability, with proactive monitoring (using tools such as Dynatrace, Azure Monitoring, etc.) and alerting mechanisms to detect and automatically resolve system issues, ensuring high availability and performance.
Write scripts and develop code using programming languages such as Python, PowerShell, Golang, JavaScript, YAML, JSON, Ruby, Apache Camel or Bash to automate routine tasks, enterprise job scheduling, data processing, proactive monitoring/alerting/event-correlation intelligence, break-fix self-healing, and system integration across multi-cloud infrastructure (e.g., Azure, GCP, AWS), both public and private.
Utilize orchestration and configuration management tools (e.g., IPSoft Amelia, Ansible) and infrastructure-as-code (IaC) frameworks (e.g., Terraform, Azure DevOps, OpenShift) to automate the provisioning, installation, configuration, and management of IT infrastructure, enterprise job scheduling (Broadcom-CA Automic), and application components, delivering end-to-end self-service IT automation solutions such as Database as a Service, Application as a Service, Disaster Recovery, monitoring/alerting, and Backup as a Service.

Architecture Standards And Strategy
Provide thought leadership, strategies, and designs to evolve the Global IT Automation journey's goals and objectives, proactively contributing to infrastructure and application availability and resiliency through an automated infrastructure test development lifecycle for IaC platforms, tools, and solutions such as IPSoft Amelia, Red Hat Ansible, OpenShift, Azure DevOps, Terraform, Broadcom-CA Automic, Dynatrace, Azure Monitoring, and others.
Provide mentorship and cross-training, and guide other infrastructure team members on IaC automation, DevOps CI/CD cultural transformation, and the infrastructure and application stack, including peer review of code and of infrastructure/application technical system and automation designs, to improve automation code quality and standards and drive wider adoption of IaC automation culture across Kontoor.

Bachelor’s degree in Computer Science, Software Development/Engineering, Infrastructure Technology Automation/Engineering, Industrial Engineering, Management Information Systems, or a related field preferred.
12+ years of progressive enterprise IT industry experience encompassing a wide range of skill sets, roles and industry verticals, including large-scale global IT operations, coding, infrastructure automation, SCRUM/Agile-DevOps/SysOps/CI-CD/SDLC methodologies, design, implementation, migrations, transformations, and deployment rollout, including broad experience driving a greenfield IT automation platform's architecture, design, implementation, administration, service management and IT governance/control processes.
Experience with Platform-as-a-Service and Software-as-a-Service capabilities, including core infrastructure, physical and virtual machines, physical and virtual network/network security, load balancers, infrastructure applications/platforms (e.g., SQL, NoSQL, IIS, Tomcat, Apache, etc.) and database services.
Experience with DevOps/CI-CD, software development, and infrastructure code automation/development: writing scripts, building automation playbooks, and developing code using programming languages such as Python, PowerShell, Golang, JavaScript, YAML, JSON, Azure CLI/ARM, Ruby, Apache Camel or Bash to automate routine tasks, data processing, AI/machine learning, and system integration across multi-cloud infrastructure (e.g., Azure, GCP, AWS), both public and private.

Why Kontoor Brands?
At Kontoor, we offer a comprehensive benefits package to fit your lifestyle. Our benefits are crafted with the same care as our products. When our employees are healthy, secure, and well, they bring their best selves to work. Kontoor Brands supports you with a competitive benefits program that provides choice and flexibility to meet your and your family’s needs – now and in the future. We offer resources to support your physical, emotional, social, and financial wellbeing, plus benefits like discounts on our apparel. Kontoor Brands also provides four weeks of Paid Parental Leave to eligible employees who are new parents, Flexible Fridays, and Tuition Reimbursement.

We are proud to offer a workplace culture centered on equitable opportunities and a sense of belonging for all team members. Here we have a global workforce of high-performing teams that both unlocks our individual uniqueness and harnesses our collaborative talents.
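As a hedged illustration of the "automate routine tasks" and self-healing theme in this posting (not Kontoor's actual tooling; the health endpoint and service name below are hypothetical, and a real setup would hook into Dynatrace alerts and Ansible playbooks rather than a local restart), a minimal Python check-and-heal script might look like this:

```python
# Illustrative sketch only; HEALTH_URL and SERVICE_NAME are hypothetical placeholders.
import subprocess
import sys
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # hypothetical health endpoint
SERVICE_NAME = "example-app"                  # hypothetical systemd unit


def is_healthy(url: str, timeout: int = 5) -> bool:
    # Treat any non-200 response or connection error as unhealthy.
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    if is_healthy(HEALTH_URL):
        print("service healthy, nothing to do")
        sys.exit(0)
    print(f"{SERVICE_NAME} unhealthy, attempting restart")
    subprocess.run(["systemctl", "restart", SERVICE_NAME], check=True)
```

The same pattern (detect, decide, remediate, report) scales up to the alert-driven, playbook-based self-healing the role describes.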

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies