A Truthful Incentive Mechanism for Mobile Crowdsourcing

by

Boya Song

B.E., Zhejiang University, 2013

A THESIS SUBMITTED IN PARTIAL FULFILMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF APPLIED SCIENCE in THE FACULTY OF GRADUATE AND POSTDOCTORAL STUDIES (Electrical and Computer Engineering)

THE UNIVERSITY OF BRITISH COLUMBIA (Vancouver)

August 2015

© Boya Song, 2015

Abstract

In a mobile crowdsourcing system, the platform utilizes ubiquitous smartphones to perform sensing tasks. For a successful mobile crowdsourcing application, it is essential to account for the heterogeneous quality of sensing of different users and to adopt a proper incentive mechanism that motivates users to contribute to the system. In this thesis, we introduce quality of sensing into incentive mechanism design. Under a budget constraint, the platform aims to maximize the valuation of the performed tasks, which depends on the quality of sensing of the users. We propose ABSee, an auction-based budget feasible mechanism, which consists of a winning bid selection rule and a payment determination rule. We obtain the approximation ratio of ABSee, which significantly improves the approximation ratio of existing budget feasible mechanisms. ABSee also satisfies the properties of computational efficiency, truthfulness, individual rationality, and budget feasibility. Extensive simulation results show that ABSee provides a higher valuation to the platform when compared with an existing mechanism in the literature.

Preface

I hereby declare that I am the author of this thesis. This thesis is an original, unpublished work under the supervision of Professor Vincent W.S. Wong.

Table of Contents

Abstract
Preface
Table of Contents
List of Figures
List of Acronyms
List of Symbols
Acknowledgements
1 Introduction
  1.1 Mobile Crowdsourcing
  1.2 Incentive Mechanisms and Quality of Sensing
  1.3 Motivation
  1.4 Contributions
  1.5 Structure of the Thesis
2 Related Work
  2.1 Incentive Mechanisms
  2.2 Quality of Sensing
  2.3 Incentive Mechanisms and Quality of Sensing
  2.4 Budget Feasible Mechanisms
3 Budget Feasible Mechanism for Mobile Crowdsourcing
  3.1 System Model and Problem Formulation
    3.1.1 Auction Framework
    3.1.2 Quality of Sensing
    3.1.3 Problem Formulation
  3.2 Auction-based Budget Feasible Mechanism
    3.2.1 Mechanism Design
    3.2.2 Walk-Through Example
  3.3 Mechanism Analysis
    3.3.1 Properties of Proposed Mechanism
    3.3.2 Approximation Ratio Analysis
  3.4 Performance Evaluation
    3.4.1 Quality of Sensing
    3.4.2 ABSee
4 Conclusion and Future Work
  4.1 Conclusion
  4.2 Future Work
Bibliography

List of Figures

3.1 A mobile crowdsourcing system consists of a platform and many smartphone users. The auction mechanism is modeled as an interactive process between the platform and the users.
3.2 Illustration for ABSee. N = {1, 2, 3, 4, 5}, Γ = {τ_1, τ_2, τ_3, τ_4, τ_5, τ_6, τ_7}, q_1 = 0.1, q_2 = 0.5, q_3 = 0.8, q_4 = 0.3, q_5 = 0.4, μ_1 = 2, μ_2 = 4, μ_3 = 7, μ_4 = 1, μ_5 = 5, μ_6 = 9, μ_7 = 3, B_1 = {({τ_1, τ_2}, 6), ({τ_3, τ_4}, 6)}, B_2 = {({τ_2}, 5), ({τ_3, τ_5}, 10)}, B_3 = {({τ_4, τ_6}, 2)}, B_4 = {({τ_3}, 4), ({τ_5, τ_6}, 8)}, B_5 = {({τ_6}, 2), ({τ_7}, 2)}.
3.3 Estimation error versus the number of auctions l (M = 500, N = 1000, G = 5000). A smaller value of the estimation error corresponds to a higher accuracy.
3.4 Valuation function V(S) versus the number of tasks M (N = 500, G = 500).
3.5 Valuation function V(S) versus the number of users N (M = 100, G = 500).
3.6 Valuation function V(S) versus budget G (M = 100, N = 500).
3.7 Crowd factor θ versus the number of users N (M = 500).
3.8 CDF of θ (M = 500, N = 1000).
3.9 θṼ(S)/V(S) versus the number of users N (M = 100, G = 100). We have θṼ(S)/V(S) ≤ 2, which validates the approximation ratio.
3.10 Running time versus the number of tasks M (N = 500, G = 100).

List of Acronyms

ABSee: Auction-based budget feasible mechanism
GPS: Global Positioning System
CDF: Cumulative Distribution Function
LTE: Long Term Evolution

List of Symbols

N: The set of smartphone users
N: The number of users
Γ: The set of tasks
τ_k: A task in set Γ
M: The number of tasks
B_n: The set of bids user n submits
σ_n: The total number of bids user n submits
J_n: Integer set {1, . . . , σ_n}
Γ_n^m: A subset of tasks user n can perform
c_n^m: The cost of user n for performing tasks in set Γ_n^m
b_n^m: The reserve price of user n for performing tasks in set Γ_n^m
B_n^m: A bid in B_n, i.e., (Γ_n^m, b_n^m)
B: The set of all bids from all users
S: The set of winning bids
R_n(S): The set of tasks within the winning bids S of user n
p_n: The payment to user n
u_n: The utility of user n
q_n: The quality indicator of user n
L: The total number of auctions in which user n is a winner
δ_k^(l): The true value of task τ_k in the lth auction
δ̂_k^(l): The estimated value of task τ_k in the lth auction
δ_{k,n}^(l): The value of task τ_k obtained from user n in the lth auction
q̂_n^(l): The quality indicator of user n obtained in the lth auction
q̄_n^(l): The quality indicator stored in the historical record after the lth auction
γ: The weight for the most recent quality indicator
g_k(S): The quality of sensing for task τ_k given the winning bids S
V(S): The valuation function the platform obtains from the winning bids S
μ_k: The weight of task τ_k
G: The budget of the platform
F: The winning bid selection rule
P: The payment determination rule
p: The payment vector, i.e., (p_1, . . . , p_N)
x_i: The ith winning bid
b̄_{x_i}: The reserve price for bid x_i
p̄_{x_i}: The payment for bid x_i
c̄_{x_i}: The cost for bid x_i
Γ̄_{x_i}: The subset of tasks within bid x_i
w: The number of winning bids in S
V_{x_i}(S_{i−1}): The marginal contribution of bid x_i ∈ B \ S_{i−1} given a bid subset S_{i−1} ⊆ B
θ: The crowd factor of the system
i_k: The kth winning bid when bid x_i is removed from the system
Q: The winning bid set after removing a winning bid
w′: The number of winning bids in Q
β_i(k): The highest reserve price that the user who has the winning bid x_i can submit to replace bid i_k with x_i in position k
ρ_i(k): The highest reserve price that the user who has the winning bid x_i can submit so that the marginal contribution per reserve price of x_i is not less than a selection threshold
α: Approximation ratio
Ṽ(S): The valuation of the platform obtained from the fractional greedy algorithm
V̄(S^(l)): The valuation of the platform obtained from the estimated quality indicators in the lth auction

Acknowledgements

I would like to express my deepest appreciation to my supervisor, Prof. Vincent Wong, for his invaluable guidance that helped me to shape my research, and for his persistent encouragement and patience that enabled me to complete my master's program. Without Prof. Wong's guidance and support, this thesis would not have been possible.

I want to thank my senior Dr. Hamed Shah-Mansouri, who has given me constructive comments and help with my research work. I also would like to thank my colleagues in the Communication Lab: Zehua Wang, Bojiang Ma, Binglai Niu, Zhiyu Dai, Enxin Yao, Yanan Sun, who have provided me with generous support and great company. I sincerely offer my best wishes and regards to all those who have helped me during my graduate study. Finally, I owe my heartfelt gratitude to my beloved parents.
Without their love, support, and understanding, I would not have harvested my invaluable life experiences at UBC.

This research is supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada.

Chapter 1

Introduction

In this chapter, we first introduce the background of mobile crowdsourcing and two important issues in mobile crowdsourcing. Then, we present the motivation and contributions of our work. The structure of the thesis is given at the end of this chapter.

1.1 Mobile Crowdsourcing

Smartphones nowadays are equipped with a variety of sensors (e.g., microphone, camera, global positioning system (GPS)) and have enhanced sensing capabilities. Mobile crowdsourcing exploits the ubiquity of smartphones and utilizes their sensors to monitor the environment [1]. In a mobile crowdsourcing system, the platform distributes the sensing tasks to the smartphone users to collect data. Smartphone users perform the tasks and send the results to the platform. Different kinds of mobile crowdsourcing applications have been developed. They include monitoring the environment [2, 3, 4], creating wireless network coverage maps [5, 6], and updating traffic conditions [7].

1.2 Incentive Mechanisms and Quality of Sensing

A key factor for a successful mobile crowdsourcing application is the participation of a large number of users. However, performing sensing tasks incurs costs on the smartphone users, such as energy consumption for data sensing, packet transmission charges from the service operator, and manual effort. In order to motivate smartphone users to participate in mobile crowdsourcing, the platform should make payments to the users to compensate for their costs. The applications in [2, 3, 4, 5, 6, 7] are based on voluntary participation of smartphone users, where their participation cannot be guaranteed.

Another important issue in mobile crowdsourcing is that the users provide different quality of sensing.
For example, consider a platform which monitors the noise level of a city. Each sensing task corresponds to obtaining the noise level at a sampling location in the city by utilizing the microphone of a smartphone. For a specific user, the quality of sensing depends on the type of microphone on his smartphone (e.g., accuracy, calibration) and his manual effort (e.g., taking the smartphone out of his pocket to collect the sample). If the platform intends to obtain a high accuracy of the noise measurement, then it needs to assign tasks and provide payments to the users who can provide a high quality of sensing. However, the quality of sensing of users may not be known by the platform a priori. Thus, the platform is required to estimate the quality of sensing of the users accurately and then select the users accordingly.

1.3 Motivation

Most of the existing works focus on either incentive mechanism design to guarantee the participation level of users (e.g., [8, 9, 10, 11, 12, 13]) or estimation of quality of sensing (e.g., [14, 15, 16, 17]). In this thesis, we consider these two issues jointly. We study a practical scenario where a platform with a limited budget aims to maximize the valuation of performed tasks, which depends on the quality of sensing of the users. The platform adopts an incentive mechanism to motivate users to participate. We model the interaction between the platform and the users as an auction and design a winning bid selection rule and a payment determination rule. The platform first announces its tasks to the smartphone users. The users then submit their bids to the platform. Each bid consists of a subset of tasks that the user can perform and the price that the user asks for these tasks. The platform selects the winning bids.
Then the users who have at least one winning bid will perform the tasks and receive payments accordingly.

The budget constraint introduces inter-dependence between the winning bid selection rule and the payment determination rule, which makes the design of the auction mechanism challenging. Although the existing budget feasible mechanisms, e.g., [18, 19], are applicable to any system, they do not take into account the large number of users in a mobile crowdsourcing system. In this case, these mechanisms may not always provide a high valuation to the platform. In our work, we consider a practical scenario where there are many participating users and winning bids in the mobile crowdsourcing system. We capture this property by defining a crowd factor θ, where 0 < θ < 1. It describes the relative contribution a winning bid can make to the platform. When more bids are submitted and selected as winning bids, the relative contribution that a winning bid can make to the system decreases. We use the crowd factor to design a novel budget feasible mechanism. The mechanism is applicable not only to mobile crowdsourcing systems but also to other systems with a large number of users.

1.4 Contributions

Our main contributions are summarized as follows:

• Novel model: We consider quality of sensing and incentive mechanism design jointly by introducing quality of sensing into the valuation function of the platform, which is proven to be a submodular function. Although it is difficult to measure the quality of sensing of a user, it can be estimated accurately in our model. We focus on tasks with continuous values in this thesis, but our model is also suitable for tasks with discrete values, including tasks with binary values.

• New mechanism: We propose an Auction-based Budget feaSiblE mEchanism, which is called ABSee. We select the winning bids by employing an iterative approach and using the crowd factor.
We rigorously prove that ABSee satisfies the properties of computational efficiency, truthfulness, individual rationality, and budget feasibility.

• Approximation ratio: To tackle the computational complexity of the auction design, ABSee adopts a greedy approach. The approximation ratio, which is the ratio between the optimal valuation and the one obtained by the proposed mechanism, is equal to 2e/(θ(e−1)). Based on the example we provide and simulation results, the approximation ratio can approach 2e/(e−1) (≈ 3.16) in mobile crowdsourcing systems, which improves the approximation ratio of 5e/(e−1) (≈ 7.91) of the budget feasible mechanism proposed in [19].

• Extensive simulation: Simulation results show that a higher valuation can be obtained by adopting ABSee when compared with the budget feasible mechanism in [19] under different scenarios. Moreover, it is shown that the quality of sensing can be estimated accurately.

1.5 Structure of the Thesis

This thesis is organized as follows. In Chapter 2, we introduce the related work. In Chapter 3, we present the budget feasible mechanism, analyze its properties and approximation ratio, and evaluate its performance. Conclusion and future work are given in Chapter 4.

Chapter 2

Related Work

In this chapter, we introduce the related work in incentive mechanism design, quality of sensing, and budget feasible mechanisms.

2.1 Incentive Mechanisms

Different forms of incentive mechanisms have been proposed in the literature. A platform-centric model and a user-centric model are proposed in [8] to maximize the utility of the platform, which is the valuation of performed tasks minus the total payment to the users. An all-pay auction is designed in [11] to maximize the utility, where the users' population is stochastic. In [9], an auction mechanism is proposed to minimize the social cost of smartphone users.
An online auction mechanism is designed in [13] for dynamic arrival and departure of smartphone users, with the goal of maximizing the valuation of performed tasks. Auction mechanisms are proposed in [10] for mobile crowdsourcing systems considering the cooperation and competition among the users. The platform is guaranteed to obtain non-negative utility. An incentive mechanism is presented in [12] to provide long-term participation incentives, which guarantees that the users will participate many times. However, none of the mentioned works considers quality of sensing.

2.2 Quality of Sensing

Existing works in this area mainly focus on tasks with discrete values. An efficient budget allocation algorithm is proposed in [14] to guarantee the accuracy of estimation of tasks with binary values. An online learning approach is adopted in [15] to maximize the quality of sensing and guarantee the robustness of the crowdsourcing system. In the data mining area, the work in [16] uses expectation maximization and maximum likelihood estimation to determine the quality of sensing of tasks with binary values in social sensing applications. Davami et al. in [20] compared five trust prediction algorithms and implemented the most accurate one to calculate the quality of sensing of users for a parking space finding application. Beyond that, an optimization problem is formulated in [17] to obtain the estimated value of the tasks and minimize the estimation error by considering tasks with both continuous values and discrete values.

2.3 Incentive Mechanisms and Quality of Sensing

Both incentive mechanism and quality of sensing are considered in a few works. An incentive mechanism is proposed in [21] to guarantee the quality of sensing. However, the platform does not have a budget constraint and only has one task. An auction mechanism is designed in [22] to maximize the valuation of the platform, where the quality of sensing of each user is assumed
to be known by the platform. A sequential Bayesian approach is used in [23] to determine the quality of sensing of each user, but the auction mechanism is designed specifically for tasks with binary values with prior information on the quality of sensing of the users. A quality-based incentive mechanism is designed in [24]. However, it does not consider the strategic behavior of users, who are interested in maximizing their own utilities and may adopt strategies to manipulate the mechanism. In contrast to the above mentioned works, given the unknown quality of sensing of users, we consider strategic users and propose an incentive mechanism for the platform under a budget constraint.

2.4 Budget Feasible Mechanisms

Budget feasible mechanisms for submodular functions are proposed by Singer [18] and improved by Chen et al. [19]. Recently, these mechanisms have been used in several applications. For example, an influence maximization problem is studied by utilizing a coverage model and a budget feasible mechanism in [25]. In our work, we propose a new budget feasible mechanism by utilizing the crowd factor of the system. The proposed mechanism can improve the approximation ratio of other budget feasible mechanisms, as summarized in Table 2.1. From the example and simulation experiments to be presented in Sections 3.3 and 3.4, respectively, we show that θ is close to 1 and the approximation ratio approaches 3.16.

Table 2.1: Comparison of budget feasible mechanisms

  Mechanism | AR     | Setting
  [18]      | 117.7  | A mechanism for general submodular functions
  [19]      | 7.91   | An improved mechanism for general submodular functions
  [25]      | 31     | A mechanism for a coverage model (i.e., a special case of submodular functions)
  ABSee     | 3.16/θ | Our proposed mechanism for general submodular functions

  AR: approximation ratio.

Chapter 3

Budget Feasible Mechanism for Mobile Crowdsourcing

In this chapter, we study an auction-based incentive mechanism which considers the heterogeneity of quality of sensing of users.
We first present the system model for the auction mechanism and quality of sensing. Then we propose ABSee and provide a walk-through example of ABSee. We prove the mentioned properties of ABSee and analyze its approximation ratio. Simulation results show that ABSee outperforms an existing mechanism in the literature and that the quality of sensing can be estimated accurately.

3.1 System Model and Problem Formulation

3.1.1 Auction Framework

A mobile crowdsourcing system, as shown in Fig. 3.1, is composed of a platform residing in a cloud computing center and many smartphone users. Users are connected with the cloud via wireless access networks (e.g., WiFi, LTE).

[Figure 3.1: A mobile crowdsourcing system consists of a platform and many smartphone users. The auction mechanism is modeled as an interactive process between the platform and the users: (1) announcing tasks, (2) submitting bids, (3) determining winning bids, (4) sending sensed data, (5) making payments.]

The set of smartphone users is denoted by N = {1, . . . , N}, where N is the number of users. The platform announces the sensing tasks to the smartphone users. We use Γ = {τ_1, . . . , τ_M} to denote the set of tasks, and there are M tasks in total.

We model the interactive process between the platform and the users as an auction. Each user n ∈ N submits a set of bids B_n. We use σ_n = |B_n| to denote the total number of bids submitted by user n. Each bid is denoted as B_n^m = (Γ_n^m, b_n^m), m ∈ J_n = {1, . . . , σ_n}, in which Γ_n^m ⊆ Γ is a subset of tasks user n can perform, and b_n^m is the corresponding reserve price, which is the price user n asks to be compensated for the cost c_n^m of performing the tasks. Thus, B_n = {B_n^1, . . . , B_n^{σ_n}} = {(Γ_n^1, b_n^1), . . . , (Γ_n^{σ_n}, b_n^{σ_n})}. We assume the subsets of tasks submitted by user n are pairwise disjoint.
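The bid structure described above can be sketched as a small data model. This is an illustrative sketch, not code from the thesis; the class name, field names, and example values are our own.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Bid:
    """One bid B_n^m = (Gamma_n^m, b_n^m) submitted by user n."""
    user: int            # index n of the submitting user
    tasks: frozenset     # subset Gamma_n^m of tasks the user can perform
    price: float         # reserve price b_n^m asked for these tasks

def all_bids(bids_per_user):
    """Collect B as the union of the bid sets B_n over all users."""
    return [bid for bids in bids_per_user.values() for bid in bids]

# Example: user 1 submits two bids with pairwise-disjoint task subsets.
bids_per_user = {
    1: [Bid(1, frozenset({"t1", "t2"}), 6.0), Bid(1, frozenset({"t3"}), 4.0)],
    2: [Bid(2, frozenset({"t2"}), 5.0)],
}
```

Representing each bid as an immutable (user, tasks, price) record keeps the winning bid set S a plain subset of B, matching the set notation used in the rest of this section.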
The platform collects all bids B = ⋃_{n∈N} B_n from the users. Then, it selects a subset S ⊆ B as the winning bids. Let R_n(S) ⊆ Γ denote the set of tasks within the winning bids of user n, i.e.,

  R_n(S) = ⋃_{m∈J_n : B_n^m = (Γ_n^m, b_n^m) ∈ S} Γ_n^m.   (3.1)

User n is called a winner if R_n(S) ≠ ∅. Each winner n performs the tasks within R_n(S) and sends the sensed data to the platform. The platform then makes payment p_n to winner n for its contribution.

Users are assumed to be strategic. For each bid B_n^m = (Γ_n^m, b_n^m) ∈ B_n, user n decides a reserve price b_n^m for Γ_n^m to maximize its own utility. The utility of user n ∈ N is

  u_n = p_n − Σ_{m∈J_n : Γ_n^m ⊆ R_n(S)} c_n^m,  if n is a winner,
  u_n = 0,  otherwise.   (3.2)

Note that b_n^m determines whether bid B_n^m of user n can be selected as one of the winning bids or not.

3.1.2 Quality of Sensing

In a mobile crowdsourcing system, the quality of sensing of a user depends on his effort and expertise to perform the tasks and the quality of the sensors of his smartphone. The quality of sensing of user n ∈ N, i.e., the accuracy of the sensed data, is modeled by a quality indicator q_n > 0. It is not known a priori and needs to be calculated. Notice that user n may have participated in the auction multiple times if the platform conducts the auction many times. For example, recall the noise map application in Chapter 1. The platform may want to determine the noise level of some locations in the city at different times for several days. It keeps a historical record of the quality indicators of different users. The platform collects all sensed data from the selected winning bids and calculates the estimated value of the tasks. It then measures the quality indicators of the winners and updates the quality indicators in its historical record.

The above model works for any kind of tasks. Consider tasks with continuous values as an example.
Assume user n has participated in the auction and has been a winner in L auctions in the past. In the lth auction, where l ∈ {1, . . . , L}, we use δ̂_k^(l) and δ_{k,n}^(l) to denote the estimated value of task τ_k performed by the winners and the value of task τ_k obtained from user n, respectively. The platform can estimate δ̂_k^(l) accurately by adopting a truth discovery approach [17]. Let q̂_n^(l) denote the quality indicator of user n obtained in the lth auction. Then, q̂_n^(l) can be measured as the variance of the sensed data provided by user n for the tasks within its winning bids:

  q̂_n^(l) = (1 / |R_n(S^(l))|) Σ_{τ_k ∈ R_n(S^(l))} (δ_{k,n}^(l) − δ̂_k^(l))²,   (3.3)

where R_n(S^(l)) is the set of tasks within the winning bids of user n in the lth auction. Let q̄_n^(l) denote the quality indicator stored in the historical record after the lth auction. Then, q̄_n^(l) can be obtained as

  q̄_n^(l) = γ q̂_n^(l) + (1 − γ) q̄_n^(l−1),  l ∈ {1, . . . , L},   (3.4)

where constant 0 < γ < 1 is the weight for the most recent quality indicator. Notice that q̄_n^(0) represents the initial value of the quality indicator, which depends on the mobile crowdsourcing application. After the platform has conducted the auction for many rounds, the estimated value approaches the true value. For the sake of simplicity, we abuse the notation and remove the round of auction (i.e., (l)) and use q_n, since we focus on a specific round of the auction.

We use g_k(S) to denote the quality of sensing for task τ_k given the winning bids S. It represents the accuracy of the estimated value of task τ_k after aggregating all sensed data from the winning bids. For tasks with continuous values, the accuracy of the estimated value can be defined by the mean square estimation error. We calculate g_k(S) by adopting the maximum likelihood estimation [26]:

  g_k(S) = ( Σ_{n∈N : τ_k∈R_n(S)} 1/q_n )^{−1}.   (3.5)

A smaller value of g_k(S) represents a higher accuracy of the estimated value of task τ_k, i.e., better quality of sensing.
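The update of the quality indicator in (3.3)-(3.4) and the aggregation in (3.5) can be sketched as follows. This is a minimal sketch under our own naming; the function names and example values are assumptions, not code or data from the thesis.

```python
def update_quality(q_prev, task_estimates, user_readings, gamma=0.5):
    """Eqs. (3.3)-(3.4): update a user's stored quality indicator.

    q_prev: indicator stored after the previous auction, q_bar^(l-1)
    task_estimates: estimated values delta_hat_k for the user's winning tasks
    user_readings: values delta_{k,n} reported by the user for those tasks
    gamma: weight 0 < gamma < 1 for the most recent quality indicator
    """
    # Eq. (3.3): mean squared deviation of the user's readings from the
    # platform's estimated task values, over the user's winning tasks.
    sq_errors = [
        (user_readings[k] - task_estimates[k]) ** 2 for k in user_readings
    ]
    q_hat = sum(sq_errors) / len(sq_errors)
    # Eq. (3.4): exponentially weighted average of new and stored indicators.
    return gamma * q_hat + (1 - gamma) * q_prev

def quality_of_sensing(winner_qualities):
    """Eq. (3.5): aggregated quality g_k(S) of one task; smaller is better."""
    return 1.0 / sum(1.0 / q for q in winner_qualities)
```

Note that g_k(S) decreases (i.e., improves) as more winners cover the task, mirroring the maximum likelihood fusion of independent measurements.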
We illustrate the model for quality of sensing by using tasks with continuous values, but our model is suitable for tasks with discrete values as well by updating (3.3) and (3.5). In statistics, for tasks with discrete values, q̂_n^(l) can be estimated by the 0-1 loss function or the squared loss function, and g_k(·) can denote the estimation error rate [17].

3.1.3 Problem Formulation

Each task has a different weight, which can be regarded as the importance of that task to the platform. We use μ_k > 0 to denote the weight of task τ_k. Let V(S) denote the valuation function obtained from the winning bids S. From (3.5), we have

  V(S) = Σ_{τ_k ∈ ⋃_{n∈N} R_n(S)} μ_k log(1 + g_k(S)^{−1})
       = Σ_{τ_k ∈ ⋃_{n∈N} R_n(S)} μ_k log(1 + Σ_{n∈N : τ_k∈R_n(S)} 1/q_n).   (3.6)

The log term in (3.6) reflects the diminishing marginal returns on the quality of sensing. We first define the submodular function and then prove that V(S) is a non-negative non-decreasing submodular function.

Definition 3.1. For a finite set Y, the function f : 2^Y → R is submodular if [27]

  f(C ∪ {y}) − f(C) ≥ f(D ∪ {y}) − f(D),

for any C ⊆ D ⊆ Y and y ∈ Y \ D. Moreover, a submodular function f is non-decreasing if f(C) ≤ f(D) for any C ⊆ D.

Lemma 3.1. The valuation V(S) in (3.6) is a non-negative non-decreasing submodular function.

Proof. From Definition 3.1, we need to show that

  V(S ∪ {B_i^j}) − V(S) ≥ V(Z ∪ {B_i^j}) − V(Z),

for any S ⊆ Z ⊆ B and B_i^j ∈ B \ Z, where i ∈ N, j ∈ J_i, and B is the set of all submitted bids. To simplify the expression, let l_n = 1/q_n, n ∈ N. According to (3.6), we have

  V(S) = Σ_{τ_k ∈ ⋃_{n∈N} R_n(S)} μ_k log(1 + Σ_{n∈N : τ_k∈R_n(S)} l_n).

Recall R_n(S) from (3.1), and denote R_n(Z) = ⋃_{m∈J_n : B_n^m = (Γ_n^m, b_n^m) ∈ Z} Γ_n^m. Given sets S and Z, let s_k = Σ_{n∈N : τ_k∈R_n(S)} l_n and z_k = Σ_{n∈N : τ_k∈R_n(Z)} l_n.
We define

  Φ_1 ≜ Γ_i^j ∩ (⋃_{n∈N} R_n(S)) ∩ (⋃_{n∈N} R_n(Z)),
  Φ_2 ≜ Γ_i^j ∩ ((⋃_{n∈N} R_n(Z)) \ (⋃_{n∈N} R_n(S))),
  Φ_3 ≜ Γ_i^j \ ((⋃_{n∈N} R_n(Z)) ∪ (⋃_{n∈N} R_n(S))).

We have Φ_1 ∪ Φ_2 ∪ Φ_3 = Γ_i^j, while Φ_1 ∩ Φ_2 = Φ_2 ∩ Φ_3 = Φ_1 ∩ Φ_3 = ∅. Then,

  V(S ∪ {B_i^j}) − V(S) = Σ_{τ_k∈Φ_1} μ_k log(1 + l_i/(1 + s_k)) + Σ_{τ_k∈Φ_2} μ_k log(1 + l_i) + Σ_{τ_k∈Φ_3} μ_k log(1 + l_i),

  V(Z ∪ {B_i^j}) − V(Z) = Σ_{τ_k∈Φ_1} μ_k log(1 + l_i/(1 + z_k)) + Σ_{τ_k∈Φ_2} μ_k log(1 + l_i/(1 + z_k)) + Σ_{τ_k∈Φ_3} μ_k log(1 + l_i).

The reason for the difference between the second terms in the above equations is that any task τ_k ∈ Φ_2 can be performed by bid B_i^j or by bids in Z, but it cannot be performed by bids in S. Since S ⊆ Z and l_n > 0 for n ∈ N, we have 0 ≤ s_k ≤ z_k. Therefore,

  V(S ∪ {B_i^j}) − V(S) ≥ V(Z ∪ {B_i^j}) − V(Z).

It can be observed that V(S) is also non-negative and non-decreasing, which completes the proof.

The platform, which has a limited budget G, aims to maximize its valuation V(S) by selecting the set of winning bids S. Formally, it designs a mechanism M = (F, P), which consists of a winning bid selection rule F and a payment determination rule P. Given all bids B from all users and budget G as input, F returns a set of winning bids S, and P returns the payment vector p = (p_1, . . . , p_N). Our objective is to design a mechanism which satisfies the following properties:

• Computational Efficiency: Both the winning bids and the payments are determined in polynomial time.

• Truthfulness: Each participating user maximizes its own utility by submitting its true cost for all of its bids, i.e., b_n^m = c_n^m, m ∈ J_n, n ∈ N.

• Individual Rationality: Each participating user has a non-negative utility, i.e., u_n ≥ 0, ∀ n ∈ N.
• Budget Feasibility: The total payment to the users should be within the limited budget G of the platform, i.e., Σ_{n∈N} p_n ≤ G.

The importance of computational efficiency, individual rationality, and budget feasibility is obvious, while truthfulness is necessary to avoid market manipulation. Since the users are strategic, they will submit bids which maximize their own utilities. However, with truthfulness, the dominant strategy of the users is to submit their true costs. These four properties together guarantee the success of the auction mechanism.

3.2 Auction-based Budget Feasible Mechanism

In this section, we first propose the budget feasible mechanism ABSee by designing the winning bid selection rule F and the payment determination rule P. We then illustrate ABSee in detail with an example.

3.2.1 Mechanism Design

Budget feasibility distinguishes our mechanism from many auction mechanisms and makes the mechanism design challenging. With the budget constraint, the winning bid selection rule F and the payment determination rule P are inter-dependent. The platform must ensure that the total payment for all selected winning bids is within the budget limit, while the payment should be determined carefully to ensure truthfulness. To guarantee that ABSee achieves truthfulness, according to Myerson's characterization [28], we have

Proposition 3.1. An auction mechanism is truthful if and only if:

• The winning bid selection rule F is monotone, i.e., if user n wins the auction by bidding b_n^m for Γ_n^m, it can also win the auction by bidding b̂_n^m ≤ b_n^m for Γ_n^m.

• User n is paid the threshold payment for winning bid B_n^m = (Γ_n^m, b_n^m), m ∈ J_n, which refers to the highest reserve price the user can submit to win this bid.

Since V(S) is a submodular function, we can adopt a greedy approach to design the monotone winning bid selection rule F.
Given a subset of bids $\mathcal{C} \subseteq \mathcal{B}$, the marginal contribution of bid $B_n^m = (\Gamma_n^m, b_n^m) \in \mathcal{B}\setminus\mathcal{C}$ is defined as
\[
V_{B_n^m}(\mathcal{C}) = V(\mathcal{C} \cup \{B_n^m\}) - V(\mathcal{C}), \quad m \in \mathcal{J}_n,\ n \in \mathcal{N}. \tag{3.7}
\]
In $\mathcal{F}$, we sort the bids based on their marginal contributions per reserve price. The $i$th bid in the sorted list, denoted by $x_i$, has the largest marginal contribution per reserve price in subset $\mathcal{B}\setminus\mathcal{S}_{i-1}$, i.e.,
\[
x_i = \arg\max_{B_n^m \in \mathcal{B}\setminus\mathcal{S}_{i-1}} \frac{V_{B_n^m}(\mathcal{S}_{i-1})}{b_n^m}, \tag{3.8}
\]
where $\mathcal{S}_{i-1} = \{x_1, x_2, \ldots, x_{i-1}\}$ and $\mathcal{S}_0 \triangleq \emptyset$. To simplify the notation, for bid $x_i \in \mathcal{B}$, let $\bar{b}_{x_i}$, $\bar{c}_{x_i}$, and $\bar{p}_{x_i}$ denote its reserve price, cost, and payment, respectively. Considering the submodularity of $V(\mathcal{S})$, the sorting implies:
\[
\frac{V_{x_1}(\mathcal{S}_0)}{\bar{b}_{x_1}} \ge \frac{V_{x_2}(\mathcal{S}_1)}{\bar{b}_{x_2}} \ge \cdots \ge \frac{V_{x_{|\mathcal{B}|}}(\mathcal{S}_{|\mathcal{B}|-1})}{\bar{b}_{x_{|\mathcal{B}|}}}. \tag{3.9}
\]
The platform selects the winning bids from set $\mathcal{B}$ within the limited budget $G$. To be selected as a winning bid, the marginal contribution per reserve price of bid $x_i$ must not be less than $\frac{V(\mathcal{S}_i)}{\theta G}$, where
\[
\theta = 1 - \frac{V^{\max}}{V(\mathcal{S})}, \tag{3.10}
\]
and
\[
V^{\max} = \max_{B_n^m\in\mathcal{B}} V(\{B_n^m\}). \tag{3.11}
\]
We use $\theta G$ as a proportion of the budget and later show that (3.10) guarantees budget feasibility. For a fixed value of $\theta$, let $w$ be the largest index such that $\frac{V_{x_w}(\mathcal{S}_{w-1})}{\bar{b}_{x_w}} \ge \frac{V(\mathcal{S}_w)}{\theta G}$, i.e.,
\[
\bar{b}_{x_w} \le \theta G\, \frac{V_{x_w}(\mathcal{S}_{w-1})}{V(\mathcal{S}_w)}. \tag{3.12}
\]
Then, $\mathcal{S} = \{x_1, x_2, \ldots, x_w\}$ is the set of winning bids. However, notice that we cannot obtain $\theta$ directly, since the set of winning bids $\mathcal{S}$ is not known a priori. We adopt an iterative approach to determine $\theta$ and $\mathcal{S}$ together. The value of $\theta$ in iteration $t$ is denoted by $\theta^{(t)}$. In each iteration $t$, the set of winning bids $\mathcal{S}$ is obtained using $\theta^{(t)}$, and then $\mathcal{S}$ is used to calculate $\theta^{(t+1)}$. When $\theta^{(t)}$ increases, more winning bids are added to set $\mathcal{S}$ and thus $V(\mathcal{S})$ increases. In this case, both $\theta^{(t)}$ and $V(\mathcal{S})$ increase monotonically. Since $\mathcal{S} \subseteq \mathcal{B}$ is finite, the iteration converges. When $\theta^{(t)}$ decreases, $V(\mathcal{S})$ also decreases monotonically, and a similar result can be obtained.
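For a fixed value of $\theta$, one inner pass of the selection just described can be sketched as follows. This is an illustration only: `marginal` stands in for the oracle $V_{B}(\mathcal{S})$, and the toy instance at the end uses an additive valuation, which is a special case of a submodular one; all names and numbers are hypothetical.

```python
def select_winners(bids, marginal, price, theta, G):
    """Greedy selection for a FIXED theta: repeatedly pick the bid with
    the largest marginal contribution per reserve price, and stop once
    the threshold (3.12) -- reserve price at most
    theta*G*gain / V(S + {bid}) -- is violated."""
    S, value = [], 0.0
    remaining = set(bids)
    while remaining:
        best = max(remaining, key=lambda b: marginal(b, S) / price[b])
        gain = marginal(best, S)
        if price[best] > theta * G * gain / (value + gain):
            break  # threshold violated; this bid and all later ones lose
        S.append(best)
        value += gain
        remaining.remove(best)
    return S, value

# Additive toy valuation: each bid contributes a fixed value.
val = {"x": 10.0, "y": 6.0, "z": 1.0}
S, V = select_winners(val, lambda b, S: val[b],
                      {"x": 2.0, "y": 3.0, "z": 5.0}, theta=0.5, G=20.0)
```

On this instance, bids x and y pass the threshold while z does not, so the loop returns $\mathcal{S} = \{x, y\}$ with $V(\mathcal{S}) = 16$.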
Thus, the convergence of the iteration is guaranteed.

According to Myerson's characterization, the goal of the payment determination rule $\mathcal{P}$ is to pay each user the threshold payment for its winning bid. For each winning bid $x_i \in \mathcal{S}$, similar to the winning bid selection rule $\mathcal{F}$, we sort the bids in set $\mathcal{B}' \triangleq \mathcal{B}\setminus\{x_i\}$ based on their marginal contributions per reserve price. We use $\mathcal{Q}_k$ to denote the first $k$ bids in this sorting, and $i_k$ to denote the $k$th bid, i.e., $i_k = \arg\max_{B_n^m\in\mathcal{B}'\setminus\mathcal{Q}_{k-1}} \frac{V_{B_n^m}(\mathcal{Q}_{k-1})}{b_n^m}$. Then, $V_{i_k}(\mathcal{Q}_{k-1})$ is the marginal contribution of the $k$th bid in this sorting. Similarly, we obtain
\[
\frac{V_{i_1}(\mathcal{Q}_0)}{\bar{b}_{i_1}} \ge \frac{V_{i_2}(\mathcal{Q}_1)}{\bar{b}_{i_2}} \ge \cdots \ge \frac{V_{i_{|\mathcal{B}|-1}}(\mathcal{Q}_{|\mathcal{B}|-2})}{\bar{b}_{i_{|\mathcal{B}|-1}}}. \tag{3.13}
\]
We use $w'$ to denote the largest index such that
\[
\bar{b}_{i_{w'}} \le \theta G\, \frac{V_{i_{w'}}(\mathcal{Q}_{w'-1})}{V(\mathcal{Q}_{w'})}. \tag{3.14}
\]
Let $\beta_i(k)$ denote the highest reserve price that the user holding winning bid $x_i$ can submit so that $x_i$ replaces bid $i_k$ in the $k$th position, and let $\rho_i(k)$ denote the highest reserve price this user can submit so that the marginal contribution per reserve price of bid $x_i$ is not less than $\frac{V(\mathcal{Q}_{k-1}\cup\{x_i\})}{\theta G}$, i.e.,
\[
\beta_i(k) = \frac{V_{x_i}(\mathcal{Q}_{k-1})\,\bar{b}_{i_k}}{V_{i_k}(\mathcal{Q}_{k-1})}, \qquad \rho_i(k) = \theta G\, \frac{V_{x_i}(\mathcal{Q}_{k-1})}{V(\mathcal{Q}_{k-1}\cup\{x_i\})}.
\]
Since $x_i$ can be placed in any position $k$ from $1$ to $w'+1$ to be selected as a winning bid, the payment $\bar{p}_{x_i}$ for $x_i$ is
\[
\bar{p}_{x_i} = \max_{k\in\{1,\ldots,w'+1\}} \{\min(\beta_i(k), \rho_i(k))\}. \tag{3.15}
\]
The user who has submitted bid $x_i$ receives the payment $\bar{p}_{x_i}$.
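Continuing the additive toy model, the payment rule can be sketched as follows; this is a simplified sketch of (3.15) for a fixed $\theta$, not the thesis implementation, and all names and numbers are hypothetical.

```python
def threshold_payment(xi, values, prices, theta, G):
    """Threshold payment (3.15) for winning bid xi under an additive toy
    valuation: re-sort B \ {xi} by value per price and, at each position
    k, compute beta_i(k) (the price at which xi displaces the k-th bid)
    and rho_i(k) (the price at which xi still passes the threshold)."""
    others = [b for b in values if b != xi]
    others.sort(key=lambda b: values[b] / prices[b], reverse=True)
    pay, V = 0.0, 0.0
    for ik in others:
        beta = values[xi] * prices[ik] / values[ik]
        rho = theta * G * values[xi] / (V + values[xi])
        pay = max(pay, min(beta, rho))
        V += values[ik]
        if prices[ik] > theta * G * values[ik] / V:
            break  # position w' passed; the (w'+1)-th slot was just scored
    return pay

values = {"x": 10.0, "y": 6.0, "z": 1.0}
prices = {"x": 2.0, "y": 3.0, "z": 5.0}
p_y = threshold_payment("y", values, prices, theta=0.5, G=20.0)
```

With $\theta = 0.5$ and $G = 20$, bids x and y win, and the computed payment $p_y = 3.75$ is exactly the highest reserve price at which y would still pass the threshold after x is selected, which is the threshold-payment property required by Proposition 3.1.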
Algorithm 3.1: ABSee Budget Feasible Mechanism

 1: Input: $V(\cdot)$, $G$, $\mathcal{B}$
    /* Winning bid selection rule $\mathcal{F}$ */
 2: $\theta^{(1)} \leftarrow \frac{1}{2}$, $t \leftarrow 0$, $V^{\max} \leftarrow \max_{B_n^m\in\mathcal{B}} V(\{B_n^m\})$
 3: do
 4:   $t \leftarrow t+1$, $i \leftarrow 1$
 5:   $\mathcal{S} \leftarrow \emptyset$, $x_i \leftarrow \arg\max_{B_n^m\in\mathcal{B}} V_{B_n^m}(\emptyset)/b_n^m$
 6:   while $\bar{b}_{x_i} \le \theta^{(t)} G\, V_{x_i}(\mathcal{S})/V(\mathcal{S}\cup\{x_i\})$ do
 7:     $\mathcal{S} \leftarrow \mathcal{S}\cup\{x_i\}$
 8:     $i \leftarrow i+1$
 9:     $x_i \leftarrow \arg\max_{B_n^m\in\mathcal{B}\setminus\mathcal{S}} V_{B_n^m}(\mathcal{S})/b_n^m$
10:   end while
11:   if $\mathcal{S} = \emptyset$ then break
12:   $\theta^{(t+1)} \leftarrow 1 - V^{\max}/V(\mathcal{S})$
13: while $\theta^{(t+1)} \ne \theta^{(t)}$
14: $\theta \leftarrow \theta^{(t)}$
    /* Payment determination rule $\mathcal{P}$ */
15: for $n \in \mathcal{N}$ do
16:   $p_n \leftarrow 0$
17: end for
18: for $x_i \in \mathcal{S}$ do
19:   $\mathcal{B}' \leftarrow \mathcal{B}\setminus\{x_i\}$, $\mathcal{Q} \leftarrow \emptyset$, $k \leftarrow 0$
20:   do
21:     $k \leftarrow k+1$
22:     $i_k \leftarrow \arg\max_{B_n^m\in\mathcal{B}'\setminus\mathcal{Q}} V_{B_n^m}(\mathcal{Q})/b_n^m$
23:     $\beta_i(k) \leftarrow V_{x_i}(\mathcal{Q})\,\bar{b}_{i_k}/V_{i_k}(\mathcal{Q})$, $\rho_i(k) \leftarrow \theta G\, V_{x_i}(\mathcal{Q})/V(\mathcal{Q}\cup\{x_i\})$
24:     $\bar{p}_{x_i} \leftarrow \max\{\bar{p}_{x_i}, \min(\beta_i(k), \rho_i(k))\}$
25:     $\mathcal{Q} \leftarrow \mathcal{Q}\cup\{i_k\}$
26:   while $\bar{b}_{i_k} \le \theta G\, V_{i_k}(\mathcal{Q}\setminus\{i_k\})/V(\mathcal{Q})$
27:   for $n \in \mathcal{N}$ do
28:     if $x_i \in \mathcal{B}_n$ then $p_n \leftarrow p_n + \bar{p}_{x_i}$
29:   end for
30: end for
31: return $(\mathcal{S}, \mathbf{p})$

Our proposed ABSee budget feasible mechanism is shown in Algorithm 3.1. Steps 2 to 14 implement the winning bid selection rule $\mathcal{F}$. We set $\theta^{(1)} = \frac{1}{2}$ for initialization. An iterative approach is adopted to calculate $\theta$. In each iteration (i.e., Steps 4 to 12), the winning bids are selected in a greedy manner according to their marginal contributions per reserve price. The value of $\theta$ is obtained in Step 14. Steps 15 to 30 implement the payment determination rule $\mathcal{P}$. The payment to each user is initialized to 0. Then, for each winning bid, the threshold payment is given to the corresponding user in Steps 18 to 30 according to (3.15). Note that Step 24 is executed $w'+1$ times for each winning bid $x_i$. Finally, we obtain the set of winning bids $\mathcal{S}$ and the payment vector $\mathbf{p}$.

3.2.2 Walk-Through Example

We use the example in Fig. 3.2 to describe how ABSee works. In this figure, squares represent the users and circles represent the tasks. The numbers above the squares denote the quality indicator $q_n$ of each user.
The numbers below the circles denote the weight $\mu_k$ of each task. Each user $n \in \{1, 2, 4, 5\}$ has two bids, denoted by $B_n^1$ and $B_n^2$, while user 3 has only one bid, i.e., $B_3^1$. Assume the platform has a budget $G = 50$.

Figure 3.2: Illustration for ABSee. $\mathcal{N} = \{1,2,3,4,5\}$, $\Gamma = \{\tau_1,\tau_2,\tau_3,\tau_4,\tau_5,\tau_6,\tau_7\}$, $q_1 = 0.1$, $q_2 = 0.5$, $q_3 = 0.8$, $q_4 = 0.3$, $q_5 = 0.4$, $\mu_1 = 2$, $\mu_2 = 4$, $\mu_3 = 7$, $\mu_4 = 1$, $\mu_5 = 5$, $\mu_6 = 9$, $\mu_7 = 3$, $\mathcal{B}_1 = \{(\{\tau_1,\tau_2\}, 6), (\{\tau_3,\tau_4\}, 6)\}$, $\mathcal{B}_2 = \{(\{\tau_2\}, 5), (\{\tau_3,\tau_5\}, 10)\}$, $\mathcal{B}_3 = \{(\{\tau_4,\tau_6\}, 2)\}$, $\mathcal{B}_4 = \{(\{\tau_3\}, 4), (\{\tau_5,\tau_6\}, 8)\}$, $\mathcal{B}_5 = \{(\{\tau_6\}, 2), (\{\tau_7\}, 2)\}$.

Winning bid selection:

1. According to Step 2 in Algorithm 3.1, we initialize $\theta^{(1)} = \frac{1}{2}$ and calculate $V^{\max} = V(\{B_4^2\}) = 20.5$. Then, according to Steps 3 to 13, winning bids are selected and $\theta$ is updated in each iteration.

2. $t = 1$. $\theta^{(1)} = \frac{1}{2}$.

• $\mathcal{S} = \emptyset$: $\frac{V_{B_1^1}(\emptyset)}{b_1^1} = \frac{V(\{B_1^1\})}{b_1^1} = \frac{14.4}{6} = 2.4$, $\frac{V_{B_1^2}(\emptyset)}{b_1^2} = 3.2$, $\frac{V_{B_2^1}(\emptyset)}{b_2^1} = 0.9$, $\frac{V_{B_2^2}(\emptyset)}{b_2^2} = 1.3$, $\frac{V_{B_3^1}(\emptyset)}{b_3^1} = 4.1$, $\frac{V_{B_4^1}(\emptyset)}{b_4^1} = 2.6$, $\frac{V_{B_4^2}(\emptyset)}{b_4^2} = 2.6$, $\frac{V_{B_5^1}(\emptyset)}{b_5^1} = 5.6$, $\frac{V_{B_5^2}(\emptyset)}{b_5^2} = 1.9$.
$x_1 = B_5^1$, since $5.6 = \frac{V_{B_5^1}(\emptyset)}{b_5^1} \ge \frac{V(\emptyset\cup\{B_5^1\})}{\theta^{(1)}G} = 0.4$.

• $\mathcal{S} = \{B_5^1\}$: $\frac{V_{B_1^1}(\{B_5^1\})}{b_1^1} = 2.4$, $\frac{V_{B_1^2}(\{B_5^1\})}{b_1^2} = 3.2$, $\frac{V_{B_2^1}(\{B_5^1\})}{b_2^1} = 0.9$, $\frac{V_{B_2^2}(\{B_5^1\})}{b_2^2} = 1.3$, $\frac{V_{B_3^1}(\{B_5^1\})}{b_3^1} = \frac{\log 2.25 + 9\log(1+\frac{1.25}{1+2.5})}{2} = 1.8$, $\frac{V_{B_4^1}(\{B_5^1\})}{b_4^1} = 2.6$, $\frac{V_{B_4^2}(\{B_5^1\})}{b_4^2} = \frac{5\log 4.3 + 9\log(1+\frac{3.3}{1+2.5})}{8} = 1.7$, $\frac{V_{B_5^2}(\{B_5^1\})}{b_5^2} = 1.9$.
$x_2 = B_1^2$, since $3.2 = \frac{V_{B_1^2}(\{B_5^1\})}{b_1^2} \ge \frac{V(\{B_5^1, B_1^2\})}{\theta^{(1)}G} = \frac{30.4}{25} = 1.2$.

• $\mathcal{S} = \{B_5^1, B_1^2\}$: $\frac{V_{B_1^1}(\mathcal{S})}{b_1^1} = 2.4$, $\frac{V_{B_2^1}(\mathcal{S})}{b_2^1} = 0.9$, $\frac{V_{B_2^2}(\mathcal{S})}{b_2^2} = 0.7$, $\frac{V_{B_3^1}(\mathcal{S})}{b_3^1} = 1.4$, $\frac{V_{B_4^1}(\mathcal{S})}{b_4^1} = 0.5$, $\frac{V_{B_4^2}(\mathcal{S})}{b_4^2} = 1.7$, $\frac{V_{B_5^2}(\mathcal{S})}{b_5^2} = 1.9$.
$x_3 = B_1^1$, since $2.4 = \frac{V_{B_1^1}(\mathcal{S})}{b_1^1} \ge \frac{V(\{B_5^1, B_1^2, B_1^1\})}{\theta^{(1)}G} = \frac{44.8}{25} = 1.8$.

• $\mathcal{S} = \{B_5^1, B_1^2, B_1^1\}$: $\frac{V_{B_2^1}(\mathcal{S})}{b_2^1} = 0.1$, $\frac{V_{B_2^2}(\mathcal{S})}{b_2^2} = 0.7$, $\frac{V_{B_3^1}(\mathcal{S})}{b_3^1} = 1.4$, $\frac{V_{B_4^1}(\mathcal{S})}{b_4^1} = 0.5$, $\frac{V_{B_4^2}(\mathcal{S})}{b_4^2} = 1.7$, $\frac{V_{B_5^2}(\mathcal{S})}{b_5^2} = 1.9$.
$x_4 = B_5^2$, but $1.88 = \frac{V_{B_5^2}(\{B_5^1, B_1^2, B_1^1\})}{b_5^2} < \frac{V(\{B_5^1, B_1^2, B_1^1, B_5^2\})}{\theta^{(1)}G} = \frac{48.6}{25} = 1.94$.
Thus, $\mathcal{S} = \{B_5^1, B_1^2, B_1^1\}$, and $\theta^{(2)} = 1 - \frac{V^{\max}}{V(\mathcal{S})} = 0.54 \ne \theta^{(1)}$.

3. $t = 2$. $\theta^{(2)} = 0.54$.

• $\mathcal{S} = \emptyset$: $x_1 = B_5^1$, since $5.6 = \frac{V_{B_5^1}(\emptyset)}{b_5^1} \ge \frac{V(\emptyset\cup\{B_5^1\})}{\theta^{(2)}G} = 0.4$.
• $\mathcal{S} = \{B_5^1\}$: $x_2 = B_1^2$, since $3.2 = \frac{V_{B_1^2}(\{B_5^1\})}{b_1^2} \ge \frac{V(\{B_5^1, B_1^2\})}{\theta^{(2)}G} = \frac{30.4}{27} = 1.1$.
• $\mathcal{S} = \{B_5^1, B_1^2\}$: $x_3 = B_1^1$, since $2.4 = \frac{V_{B_1^1}(\mathcal{S})}{b_1^1} \ge \frac{V(\{B_5^1, B_1^2, B_1^1\})}{\theta^{(2)}G} = \frac{44.8}{27} = 1.7$.
• $\mathcal{S} = \{B_5^1, B_1^2, B_1^1\}$: $x_4 = B_5^2$, since $1.88 = \frac{V_{B_5^2}(\mathcal{S})}{b_5^2} \ge \frac{V(\{B_5^1, B_1^2, B_1^1, B_5^2\})}{\theta^{(2)}G} = \frac{48.6}{27} = 1.79$.
• $\mathcal{S} = \{B_5^1, B_1^2, B_1^1, B_5^2\}$: $\frac{V_{B_2^1}(\mathcal{S})}{b_2^1} = 0.1$, $\frac{V_{B_2^2}(\mathcal{S})}{b_2^2} = 0.7$, $\frac{V_{B_3^1}(\mathcal{S})}{b_3^1} = 1.4$, $\frac{V_{B_4^1}(\mathcal{S})}{b_4^1} = 0.5$, $\frac{V_{B_4^2}(\mathcal{S})}{b_4^2} = 1.7$.
$x_5 = B_4^2$, but $1.7 = \frac{V_{B_4^2}(\mathcal{S})}{b_4^2} < \frac{V(\mathcal{S}\cup\{B_4^2\})}{\theta^{(2)}G} = \frac{62.0}{27} = 2.3$.
Thus, we obtain $\mathcal{S} = \{B_5^1, B_1^2, B_1^1, B_5^2\}$, and $\theta^{(3)} = 1 - \frac{V^{\max}}{V(\mathcal{S})} = 0.58 \ne \theta^{(2)}$.

4. $t = 3$. $\theta^{(3)} = 0.58$.

• $\mathcal{S} = \emptyset$: $x_1 = B_5^1$, since $5.6 \ge \frac{V(\emptyset\cup\{B_5^1\})}{\theta^{(3)}G} = 0.4$.
• $\mathcal{S} = \{B_5^1\}$: $x_2 = B_1^2$, since $3.2 \ge \frac{V(\{B_5^1, B_1^2\})}{\theta^{(3)}G} = \frac{30.4}{29} = 1.0$.
• $\mathcal{S} = \{B_5^1, B_1^2\}$: $x_3 = B_1^1$, since $2.4 \ge \frac{V(\{B_5^1, B_1^2, B_1^1\})}{\theta^{(3)}G} = \frac{44.8}{29} = 1.5$.
• $\mathcal{S} = \{B_5^1, B_1^2, B_1^1\}$: $x_4 = B_5^2$, since $1.88 \ge \frac{V(\{B_5^1, B_1^2, B_1^1, B_5^2\})}{\theta^{(3)}G} = \frac{48.6}{29} = 1.68$.
• $\mathcal{S} = \{B_5^1, B_1^2, B_1^1, B_5^2\}$: $x_5 = B_4^2$, but $1.7 = \frac{V_{B_4^2}(\mathcal{S})}{b_4^2} < \frac{V(\mathcal{S}\cup\{B_4^2\})}{\theta^{(3)}G} = \frac{62.0}{29} = 2.1$.
Thus, we still have $\mathcal{S} = \{B_5^1, B_1^2, B_1^1, B_5^2\}$, and $\theta^{(4)} = 1 - \frac{V^{\max}}{V(\mathcal{S})} = 0.58 = \theta^{(3)}$.

5. According to Step 14 in Algorithm 3.1, we obtain $\theta = 0.58$.

Payment determination:

1. $x_1 = B_5^1$: Excluding $B_5^1$, the winning bids and the next bid after them are $B_3^1$, $B_1^2$, $B_1^1$, $B_4^2$.
$\beta_1(1) = \frac{V_{x_1}(\mathcal{Q}_0)\,\bar{b}_{i_1}}{V_{i_1}(\mathcal{Q}_0)} = \frac{9\log(1+\frac{1}{0.4})\times 2}{10\log(1+\frac{1}{0.8})} = 2.78$, $\rho_1(1) = \theta G\,\frac{V_{x_1}(\mathcal{Q}_0)}{V(\mathcal{Q}_0\cup\{x_1\})} = \theta G = 28.88$, $\beta_1(2) = 2.18$, $\rho_1(2) = 13.09$, $\beta_1(3) = 2.80$, $\rho_1(3) = 5.83$, $\beta_1(4) = 3.47$, $\rho_1(4) = 0.14$.
Thus, $\bar{p}_{x_1} = 2.80$.

2. $x_2 = B_1^2$: Excluding $B_1^2$, the winning bids and the next bid after them are $B_5^1$, $B_4^1$, $B_1^1$, $B_5^2$, $B_3^1$, $B_4^2$.
$\beta_2(1) = 3.40$, $\rho_2(1) = 28.88$, $\beta_2(2) = 7.48$, $\rho_2(2) = 18.19$, $\beta_2(3) = 4.49$, $\rho_2(3) = 9.63$, $\beta_2(4) = 5.73$, $\rho_2(4) = 6.66$, $\beta_2(5) = 6.05$, $\rho_2(5) = 6.17$, $\beta_2(6) = 6.65$, $\rho_2(6) = 5.45$. Thus, $\bar{p}_{x_2} = 7.48$.

3. $x_3 = B_1^1$: Excluding $B_1^1$, the winning bids and the next bid after them are $B_5^1$, $B_1^2$, $B_5^2$, $B_4^2$, $B_2^1$.
$\beta_3(1) = 2.55$, $\rho_3(1) = 28.88$, $\beta_3(2) = 4.50$, $\rho_3(2) = 16.19$, $\beta_3(3) = 7.66$, $\rho_3(3) = 9.27$, $\beta_3(4) = 8.62$, $\rho_3(4) = 8.55$, $\beta_3(5) = 16.37$, $\rho_3(5) = 6.71$. Thus, $\bar{p}_{x_3} = 8.55$.

4. $x_4 = B_5^2$: Excluding $B_5^2$, the winning bids and the next bid after them are $B_5^1$, $B_1^2$, $B_1^1$, $B_4^2$.
$\beta_4(1) = 0.67$, $\rho_4(1) = 28.88$, $\beta_4(2) = 1.18$, $\rho_4(2) = 7.22$, $\beta_4(3) = 1.57$, $\rho_4(3) = 3.17$, $\beta_4(4) = 2.25$, $\rho_4(4) = 2.23$. Thus, $\bar{p}_{x_4} = 2.23$.

Then, $p_1 = \bar{p}_{x_2} + \bar{p}_{x_3} = 7.48 + 8.55 = 16.03$, $p_5 = \bar{p}_{x_1} + \bar{p}_{x_4} = 2.80 + 2.23 = 5.03$, and $p_2 = p_3 = p_4 = 0$.

3.3 Mechanism Analysis

In this section, we first prove that ABSee satisfies all of the properties introduced in Section 3.1. We then derive its approximation ratio.

3.3.1 Properties of the Proposed Mechanism

Theorem 3.1. ABSee is computationally efficient.

Proof. Since calculating $V(\mathcal{S})$ takes $O(|\mathcal{B}|M)$ time, finding the bid with the maximum marginal contribution per reserve price takes $O(|\mathcal{B}|^2 M)$ time. Both the winning bid selection rule $\mathcal{F}$ and the payment determination rule $\mathcal{P}$ have nested loops. The while loop in $\mathcal{F}$ takes $O(|\mathcal{B}|^3 M)$ time because the number of winning bids is at most $|\mathcal{B}|$. The value of $\theta^{(t)}$ is updated only if more winning bids are selected, so the do-while loop also runs at most $|\mathcal{B}|$ times. Thus, the running time of $\mathcal{F}$ is $O(|\mathcal{B}|^4 M)$. Similarly, the running time of $\mathcal{P}$ is also $O(|\mathcal{B}|^4 M)$.

Note that the running time shown above is conservative. In practice, the number of winning bids is much smaller than $|\mathcal{B}|$.

Theorem 3.2. ABSee is truthful.

Proof. It is required to show that ABSee satisfies Myerson's characterization (Proposition 3.1).
Recall that the quality indicators $q_n$, $n \in \mathcal{N}$, are obtained by the platform from the auctions conducted previously, so the marginal contributions do not depend on the submitted reserve prices. In the greedy approach in $\mathcal{F}$, a lower reserve price can only place a bid in the same or an earlier position, so monotonicity is guaranteed. Notice that monotonicity is not influenced by the value of $\theta$: the iteration in the winning bid selection rule $\mathcal{F}$ cannot change the monotonicity of the greedy approach. Thus, we only need to show that the users receive the threshold payments for their winning bids. The proof follows the approach presented in [18].

Consider winning bid $x_i$. From (3.15), let $r \le w'+1$ be the index such that $\bar{p}_{x_i} = \max_{k\in\{1,\ldots,w'+1\}}\{\min(\beta_i(k), \rho_i(k))\} = \min(\beta_i(r), \rho_i(r))$. Recall $w'$ from (3.14). We show that the user who has submitted bid $x_i$ wins the auction by bidding $\bar{b}_{x_i} \le \bar{p}_{x_i}$ and loses the auction if $\bar{b}_{x_i} > \bar{p}_{x_i}$.

If $\bar{b}_{x_i} \le \bar{p}_{x_i}$, we have $\bar{b}_{x_i} \le \beta_i(r)$, so bid $x_i$ can be placed in the first $w'+1$ positions of the sorted list given by (3.13). We also have $\bar{b}_{x_i} \le \rho_i(r)$. Thus, $x_i$ will be chosen as a winning bid.

If $\bar{b}_{x_i} > \bar{p}_{x_i}$, we have two cases.

Case 1: $\beta_i(r) \le \rho_i(r)$, i.e., $\bar{p}_{x_i} = \beta_i(r)$. In this case, $x_i$ will be placed after the $r$th position in the sorted list. If $\beta_i(r) = \max_{k\in\{1,\ldots,w'+1\}} \beta_i(k)$, then $x_i$ cannot be a winning bid. Otherwise, if $\beta_i(r) < \beta_i(k)$ for some $k \in \{1,\ldots,w'+1\}$, we have
\[
\rho_i(k) < \beta_i(r) = \bar{p}_{x_i} < \bar{b}_{x_i}.
\]
Thus, $x_i$ cannot be placed at the $k$th position to be selected as a winning bid.

Case 2: $\beta_i(r) > \rho_i(r)$, i.e., $\bar{p}_{x_i} = \rho_i(r)$. If $\rho_i(r) = \max_{k\in\{1,\ldots,w'+1\}} \rho_i(k)$, then $x_i$ cannot be a winning bid. Otherwise, if $\rho_i(r) < \rho_i(k)$ for some $k \in \{1,\ldots,w'+1\}$, we have
\[
\beta_i(k) < \rho_i(r) = \bar{p}_{x_i} < \bar{b}_{x_i}.
\]
Thus, $x_i$ cannot be a winning bid.

Since the winning bid selection rule $\mathcal{F}$ is monotone and each user receives the threshold payment for its winning bids, we conclude that ABSee is truthful.

Theorem 3.3. ABSee is individually rational.

Proof.
If a user is not a winner, its utility is zero, as shown in (3.2). If user $n \in \mathcal{N}$ is a winner, from (3.2), its utility can also be written as
\[
u_n = p_n - \sum_{m\in\mathcal{J}_n:\, \Gamma_n^m \in R_n(\mathcal{S})} c_n^m = \sum_{x_i\in\mathcal{B}_n\cap\mathcal{S}} (\bar{p}_{x_i} - \bar{c}_{x_i}).
\]
Notice that $x_i$ is a winning bid. Thus, user $n$ has non-negative utility if $\bar{p}_{x_i} - \bar{c}_{x_i} \ge 0$ for each of its winning bids $x_i$. Since ABSee is truthful, we have $\bar{b}_{x_i} = \bar{c}_{x_i}$ for a winning bid $x_i$. If there exists some $k$ from $1$ to $w'+1$ such that $\bar{b}_{x_i} \le \min(\beta_i(k), \rho_i(k))$, then $\bar{b}_{x_i} \le \bar{p}_{x_i}$; we show next that such a $k$ exists. It then follows that $\bar{p}_{x_i} - \bar{c}_{x_i} \ge 0$ for all $x_i$, and thus $u_n \ge 0$.

For winning bid $x_i$, the payment determination rule $\mathcal{P}$ implies that $\mathcal{Q}_k = \mathcal{S}_k$ for $k < i$; thus $V_{x_i}(\mathcal{S}_{i-1}) = V_{x_i}(\mathcal{Q}_{i-1})$. Recall that bid $x_i$ is sorted in the $i$th position in (3.9) among bids $\mathcal{B}$, and bid $i_k$ is sorted in the $k$th position in (3.13) among bids $\mathcal{B}\setminus\{x_i\}$. Consider the case $k = i$. We have
\[
\bar{b}_{x_i} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{S}_{i-1}\cup\{x_i\})} = \theta G\, \frac{V_{x_i}(\mathcal{Q}_{k-1})}{V(\mathcal{Q}_{k-1}\cup\{x_i\})} = \rho_i(k). \tag{3.16}
\]
From (3.8), we have $\frac{V_{x_i}(\mathcal{S}_{i-1})}{\bar{b}_{x_i}} \ge \frac{V_{i_k}(\mathcal{S}_{i-1})}{\bar{b}_{i_k}}$, so
\[
\bar{b}_{x_i} \le \frac{V_{x_i}(\mathcal{S}_{i-1})\,\bar{b}_{i_k}}{V_{i_k}(\mathcal{S}_{i-1})} = \frac{V_{x_i}(\mathcal{Q}_{k-1})\,\bar{b}_{i_k}}{V_{i_k}(\mathcal{Q}_{k-1})} = \beta_i(k). \tag{3.17}
\]
From (3.16) and (3.17), we have $\bar{b}_{x_i} \le \min\{\beta_i(k), \rho_i(k)\}$ when $k = i$. This results in $\bar{b}_{x_i} \le \bar{p}_{x_i}$ and completes the proof.

To prove the budget feasibility of ABSee, we use the following lemma.

Lemma 3.2. The payment $\bar{p}_{x_i}$ for each winning bid $x_i$ satisfies $\bar{p}_{x_i} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{Q}_{w'})}$.

Proof. For a winning bid $x_i$, from the submodularity of the valuation function, we have
\[
V_{x_i}(\mathcal{S}_{i-1}) \ge V_{x_i}(\mathcal{Q}_{k-1}), \quad \forall\, k \ge i. \tag{3.18}
\]
Let $r$ be the index for which $\bar{p}_{x_i} = \min\{\beta_i(r), \rho_i(r)\}$. We first prove that $r \ge i$. By contradiction, assume $r < i$. From (3.8), $x_r = \arg\max_{B_n^m\in\mathcal{B}\setminus\mathcal{S}_{r-1}} \frac{V_{B_n^m}(\mathcal{S}_{r-1})}{b_n^m}$. Thus,
\[
\frac{V_{x_r}(\mathcal{S}_{r-1})}{\bar{b}_{x_r}} > \frac{V_{x_i}(\mathcal{S}_{r-1})}{\bar{b}_{x_i}}.
\]
We assume the strict inequality holds to simplify the proof; the same result can be obtained if equality is considered.
From the definition of $\beta_i(r)$, we have
\[
\frac{V_{i_r}(\mathcal{Q}_{r-1})}{\bar{b}_{i_r}} = \frac{V_{x_i}(\mathcal{Q}_{r-1})}{\beta_i(r)}.
\]
Since $\mathcal{Q}_k = \mathcal{S}_k$ and $x_k = i_k$ for $k < i$, we have $V_{x_i}(\mathcal{S}_{r-1}) = V_{x_i}(\mathcal{Q}_{r-1})$, $V_{x_r}(\mathcal{S}_{r-1}) = V_{i_r}(\mathcal{Q}_{r-1})$, and $\bar{b}_{x_r} = \bar{b}_{i_r}$. From these equalities, we conclude that $\bar{p}_{x_i} \le \beta_i(r) < \bar{b}_{x_i}$, which contradicts Theorem 3.3. Hence, we have $r \ge i$.

If $r \le w'$, then
\[
\frac{V_{i_r}(\mathcal{Q}_{r-1})}{\bar{b}_{i_r}} \ge \frac{V_{i_{w'}}(\mathcal{Q}_{w'-1})}{\bar{b}_{i_{w'}}}.
\]
From (3.18), we have
\[
\bar{p}_{x_i} \le \beta_i(r) = \frac{V_{x_i}(\mathcal{Q}_{r-1})\,\bar{b}_{i_r}}{V_{i_r}(\mathcal{Q}_{r-1})} \le \frac{V_{x_i}(\mathcal{Q}_{r-1})\,\bar{b}_{i_{w'}}}{V_{i_{w'}}(\mathcal{Q}_{w'-1})} \le \theta G\, \frac{V_{x_i}(\mathcal{Q}_{r-1})}{V(\mathcal{Q}_{w'})} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{Q}_{w'})}. \tag{3.19}
\]
If $r = w'+1$, we have
\[
\bar{p}_{x_i} \le \rho_i(r) \le \theta G\, \frac{V_{x_i}(\mathcal{Q}_{w'})}{V(\mathcal{Q}_{w'}\cup\{x_i\})} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{Q}_{w'})}. \tag{3.20}
\]
From (3.19) and (3.20), we conclude that $\bar{p}_{x_i} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{Q}_{w'})}$, which completes the proof.

By utilizing Lemma 3.2, we now prove budget feasibility.

Theorem 3.4. ABSee is budget feasible.

Proof. Proving $\sum_{n\in\mathcal{N}} p_n \le G$ is equivalent to proving $\sum_{x_i\in\mathcal{S}} \bar{p}_{x_i} \le G$. According to the payment determination rule $\mathcal{P}$, we have
\[
V(\mathcal{S}) - V(\mathcal{Q}_{w'}) \le \max_{B_n^m\in\mathcal{B}} V(\{B_n^m\}).
\]
We obtain
\[
1 - \frac{V(\mathcal{Q}_{w'})}{V(\mathcal{S})} = \frac{V(\mathcal{S}) - V(\mathcal{Q}_{w'})}{V(\mathcal{S})} \le \frac{\max_{B_n^m\in\mathcal{B}} V(\{B_n^m\})}{V(\mathcal{S})}.
\]
Then,
\[
\frac{V(\mathcal{Q}_{w'})}{V(\mathcal{S})} \ge 1 - \frac{\max_{B_n^m\in\mathcal{B}} V(\{B_n^m\})}{V(\mathcal{S})} = \theta.
\]
Thus, $V(\mathcal{Q}_{w'}) \ge \theta V(\mathcal{S})$. From Lemma 3.2, we know that for each winning bid $x_i$, the payment satisfies $\bar{p}_{x_i} \le \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{Q}_{w'})}$. Then, we have
\[
\bar{p}_{x_i} \le \frac{V_{x_i}(\mathcal{S}_{i-1})\,\theta G}{V(\mathcal{Q}_{w'})} \le \frac{V_{x_i}(\mathcal{S}_{i-1})\,\theta G}{\theta V(\mathcal{S})} = \frac{V_{x_i}(\mathcal{S}_{i-1})\,G}{V(\mathcal{S})}.
\]
According to (3.7) and $\mathcal{S} = \{x_1, x_2, \ldots, x_w\}$, we have $\sum_{x_i\in\mathcal{S}} V_{x_i}(\mathcal{S}_{i-1}) = V_{x_1}(\emptyset) + V_{x_2}(\mathcal{S}_1) + \cdots + V_{x_w}(\mathcal{S}_{w-1}) = V(\mathcal{S})$. We conclude that $\sum_{x_i\in\mathcal{S}} \bar{p}_{x_i} \le \sum_{x_i\in\mathcal{S}} \frac{V_{x_i}(\mathcal{S}_{i-1})\,G}{V(\mathcal{S})} = G$.

Recall that $\mathcal{Q}_{w'}$ is the set of winning bids selected from bids $\mathcal{B}\setminus\{x_i\}$. From $\theta \le \frac{V(\mathcal{Q}_{w'})}{V(\mathcal{S})}$ in the above proof, we observe that $\theta$ describes the lower bound of the relative valuation when a winning bid is removed from the system. Given budget $G$ and the number of tasks $M$, when more users participate in the system and submit more bids, each winning bid has a smaller relative contribution to the system. We formally define $\theta$ as follows.

Definition 3.2.
We define $\theta \triangleq 1 - \frac{V^{\max}}{V(\mathcal{S})}$, where $V^{\max} = \max_{B_n^m\in\mathcal{B}} V(\{B_n^m\})$ from (3.11). We call $\theta$ the crowd factor.

From Algorithm 3.1, we have $0 < \theta < 1$. For a successful mobile crowdsourcing application, there are usually a large number of participating users, and the platform selects many winning bids. Thus, $V(\mathcal{S})$ is much larger than $V^{\max}$ (i.e., $V(\mathcal{S}) \gg V^{\max}$) and $\theta$ approaches 1.

Consider the following scenario as an example. Each user has the same quality indicator and submits the same number of bids, i.e., $q_n = q$ and $\sigma_n = \sigma$, $n \in \mathcal{N}$. We use $\bar{\Gamma}_{x_i}$ to denote the subset of tasks within bid $x_i \in \mathcal{B}$. The subsets of tasks within all bids are pairwise disjoint, i.e., $\bar{\Gamma}_{x_i} \cap \bar{\Gamma}_{x_i'} = \emptyset$ for $x_i, x_i' \in \mathcal{B}$. The number of tasks within each bid is also the same, i.e., $|\bar{\Gamma}_{x_i}| = \Upsilon$, $x_i \in \mathcal{B}$. All tasks have the same weight, i.e., $\mu_k = \mu$, $\tau_k \in \Gamma$, and every reserve price $\bar{b}_{x_i}$ is equal to $b$. Then, we have $V_{x_i}(\mathcal{S}_{i-1}) = V(\{x_i\}) = \sum_{\tau_k\in\bar{\Gamma}_{x_i}} \mu_k \log(1+\frac{1}{q}) = \Upsilon\mu\log(1+\frac{1}{q})$, which we denote by $\bar{V}$. Thus, we obtain $V(\mathcal{S}) = \sum_{x_i\in\mathcal{S}} V_{x_i}(\mathcal{S}_{i-1}) = |\mathcal{S}|\,\bar{V}$ and $V^{\max} = \bar{V}$. From the inequalities (3.9) and (3.12), the largest index of a winning bid (i.e., $w$) satisfies $\frac{\bar{V}}{b} \ge \frac{w\bar{V}}{\theta G}$. Thus, we obtain $w = \lfloor\frac{\theta G}{b}\rfloor$. When $\frac{G}{b}$ is large, the number of winning bids is $|\mathcal{S}| = w \approx \frac{\theta G}{b}$. From (3.10), we have $\theta = 1 - \frac{V^{\max}}{V(\mathcal{S})} = 1 - \frac{\bar{V}}{w\bar{V}} = 1 - \frac{1}{w} \approx 1 - \frac{b}{\theta G}$. Solving the fixed-point equation $\theta = 1 - \frac{b}{\theta G}$, when $\frac{4b}{G} < 1$, we obtain
\[
\theta = \frac{1}{2}\left(1 + \sqrt{1 - \frac{4b}{G}}\right).
\]
Since $w \approx \frac{\theta G}{b}$, the number of submitted bids should be at least $\frac{\theta G}{b}$. Since $N\sigma = |\mathcal{B}|$, the number of users should be at least $\frac{\theta G}{\sigma b}$. Thus, when $\frac{4b}{G} \ll 1$ and $N > \frac{G}{\sigma b}$, i.e., $N \gg \frac{4}{\sigma}$, the crowd factor $\theta$ approaches 1. Since these conditions can easily be satisfied in a practical mobile crowdsourcing system, $\theta$ is close to 1 with high probability. In Section 3.4, we show that $\theta$ is close to 1 under different scenarios in a mobile crowdsourcing system.
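The closed-form crowd factor derived for the homogeneous scenario can be checked numerically. In the sketch below, the values of $b$ and $G$ are arbitrary illustrative choices.

```python
import math

def crowd_factor(b, G):
    """Larger root of the fixed-point equation theta = 1 - b/(theta*G)
    for the homogeneous scenario; requires 4b/G < 1."""
    assert 4.0 * b / G < 1.0
    return 0.5 * (1.0 + math.sqrt(1.0 - 4.0 * b / G))

theta = crowd_factor(b=2.0, G=500.0)
# theta indeed satisfies theta = 1 - b/(theta*G), and is close to 1
# when the budget G is large relative to the reserve price b.
assert abs(theta - (1.0 - 2.0 / (theta * 500.0))) < 1e-12
assert theta > 0.99
```

The second assertion mirrors the claim in the text: with $\frac{4b}{G} \ll 1$, the crowd factor is close to 1.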
3.3.2 Approximation Ratio Analysis

A mechanism is called $\alpha$-approximate if it determines a subset $\mathcal{S} \subseteq \mathcal{B}$ such that $\mathrm{opt}(\mathcal{B}) \le \alpha V(\mathcal{S})$, where $\mathrm{opt}(\mathcal{B})$ is the optimal value of the following optimization problem:
\[
\begin{aligned}
\underset{\mathcal{S}\subseteq\mathcal{B}}{\text{maximize}} \quad & V(\mathcal{S}) \\
\text{subject to} \quad & \sum_{x_i\in\mathcal{S}} \bar{c}_{x_i} \le G.
\end{aligned} \tag{3.21}
\]
Problem (3.21) is a budgeted submodular function maximization problem [29]. Like the knapsack problem, problem (3.21) is NP-hard, so the optimal solution cannot be obtained in polynomial time.

To determine the approximation ratio of our proposed mechanism, we first introduce a fractional greedy algorithm [19] to solve problem (3.21). Similar to our proposed winning bid selection rule $\mathcal{F}$, we select bids with the highest marginal contribution per reserve price until no more bids can be added due to the budget limit, assuming that the contribution of a bid can be fractional. Let $H$ be the largest index such that $\sum_{i=1}^H \bar{c}_{x_i} \le G$. We define $C_{x_{H+1}} \triangleq G - \sum_{i=1}^H \bar{c}_{x_i}$ and $V'_{x_{H+1}}(\mathcal{S}_H) \triangleq V_{x_{H+1}}(\mathcal{S}_H)\,\frac{C_{x_{H+1}}}{\bar{c}_{x_{H+1}}}$. The valuation obtained by the fractional greedy algorithm is
\[
\tilde{V}(\mathcal{S}) \triangleq \sum_{i=1}^H V_{x_i}(\mathcal{S}_{i-1}) + V'_{x_{H+1}}(\mathcal{S}_H). \tag{3.22}
\]
We have the following lemma from [19]:

Lemma 3.3. The fractional greedy solution has an approximation ratio of $\frac{e}{e-1}$ for problem (3.21). That is,
\[
\mathrm{opt}(\mathcal{B}) \le \left(\frac{e}{e-1}\right)\tilde{V}(\mathcal{S}),
\]
where $\mathrm{opt}(\mathcal{B})$ is the optimal value given the set of bids $\mathcal{B}$.

By utilizing Lemma 3.3, we have the following theorem:

Theorem 3.5. ABSee achieves an approximation ratio of $\frac{2e}{\theta(e-1)}$.

Proof. Recall that the set of winning bids is $\mathcal{S} = \{x_1, x_2, \ldots, x_w\}$. For any $i \in \{w+1, \ldots, H\}$, we have
\[
\frac{\bar{c}_{x_i}}{V_{x_i}(\mathcal{S}_{i-1})} \ge \frac{\bar{c}_{x_{w+1}}}{V_{x_{w+1}}(\mathcal{S}_w)} > \frac{\theta G}{V(\mathcal{S}_{w+1})},
\]
where the second inequality holds because $x_i$ is no longer a winning bid when $i > w$. We obtain
\[
\bar{c}_{x_i} > \theta G\, \frac{V_{x_i}(\mathcal{S}_{i-1})}{V(\mathcal{S}_{w+1})} \quad \text{and} \quad C_{x_{H+1}} > \theta G\, \frac{V'_{x_{H+1}}(\mathcal{S}_H)}{V(\mathcal{S}_{w+1})}.
\]
Then, we have
\[
\theta G\, \frac{\sum_{i=w+1}^H V_{x_i}(\mathcal{S}_{i-1}) + V'_{x_{H+1}}(\mathcal{S}_H)}{\sum_{i=1}^{w+1} V_{x_i}(\mathcal{S}_{i-1})} < \sum_{i=w+1}^H \bar{c}_{x_i} + C_{x_{H+1}} \le G.
\]
Thus, we obtain
\[
\sum_{i=w+1}^H V_{x_i}(\mathcal{S}_{i-1}) + V'_{x_{H+1}}(\mathcal{S}_H) < \frac{\sum_{i=1}^{w+1} V_{x_i}(\mathcal{S}_{i-1})}{\theta}. \tag{3.23}
\]
From (3.22) and (3.23), we have
\[
\begin{aligned}
\tilde{V}(\mathcal{S}) &= \sum_{i=1}^H V_{x_i}(\mathcal{S}_{i-1}) + V'_{x_{H+1}}(\mathcal{S}_H) \\
&= \sum_{i=1}^w V_{x_i}(\mathcal{S}_{i-1}) + \sum_{i=w+1}^H V_{x_i}(\mathcal{S}_{i-1}) + V'_{x_{H+1}}(\mathcal{S}_H) \\
&< \sum_{i=1}^w V_{x_i}(\mathcal{S}_{i-1}) + \frac{\sum_{i=1}^w V_{x_i}(\mathcal{S}_{i-1}) + V_{x_{w+1}}(\mathcal{S}_w)}{\theta} \\
&\le \left(1 + \frac{1}{\theta}\right)\sum_{i=1}^w V_{x_i}(\mathcal{S}_{i-1}) + \frac{V^{\max}}{\theta}.
\end{aligned}
\]
Recall that $V(\mathcal{S}) = \sum_{i=1}^w V_{x_i}(\mathcal{S}_{i-1})$ and $\theta = 1 - \frac{V^{\max}}{V(\mathcal{S})}$. We have
\[
\tilde{V}(\mathcal{S}) < V(\mathcal{S}) + \frac{V(\mathcal{S})}{V(\mathcal{S}) - V^{\max}}\,V(\mathcal{S}) + \frac{V(\mathcal{S})}{V(\mathcal{S}) - V^{\max}}\,V^{\max} = 2V(\mathcal{S})\,\frac{V(\mathcal{S})}{V(\mathcal{S}) - V^{\max}} = \frac{2}{\theta}\,V(\mathcal{S}).
\]
From Lemma 3.3, we have $\mathrm{opt}(\mathcal{B}) \le \frac{e}{e-1}\tilde{V}(\mathcal{S})$. Then, we can bound the optimal value as
\[
\mathrm{opt}(\mathcal{B}) \le \frac{e}{e-1}\,\tilde{V}(\mathcal{S}) < \frac{2e}{\theta(e-1)}\,V(\mathcal{S}).
\]
Thus, ABSee achieves an approximation ratio of $\frac{2e}{\theta(e-1)}$. When $\theta$ is close to 1, the approximation ratio approaches $\frac{2e}{e-1}$.

3.4 Performance Evaluation

In this section, we first evaluate the performance of the approach for estimating the quality of sensing of each user. We then compare the performance of ABSee with another budget feasible mechanism, GREEDY-SM [19], which also satisfies all the desirable properties.

We assume that tasks and users are randomly distributed within a 1 km × 1 km region. A user can perform a task if the distance between the user and the task is less than 50 m. We assume each user submits 3 bids (i.e., $\sigma_n = 3$). The cost of user $n$ for $\Gamma_n^m$ (i.e., $c_n^m$) is $\eta_n|\Gamma_n^m|$, where $\eta_n$ is uniformly distributed over $[1, 5]$. The weight of task $\tau_k$ (i.e., $\mu_k$) in (3.6) is uniformly distributed over $[1, 10]$. All results are obtained by averaging over 100 instances.

3.4.1 Quality of Sensing

The platform calculates and keeps a record of the quality indicators of the participating users. To simulate this process, consider the noise map application. We assume each task corresponds to the noise level at a specific position. The real noise level of task $\tau_k$ in the $l$th auction, denoted by $\delta_k^{(l)}$, is uniformly distributed over $[0, 5]$.
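The estimation process of this subsection can be imitated with a small simulation. The sketch below follows the setup here (each user's reading is Gaussian noise around the true value, with the quality indicator acting as the noise standard deviation, and every estimate initialized to 0.5). The exact update rule (3.4) is not reproduced in this excerpt, so plain exponential smoothing with weight $\gamma$ is used as a stand-in; all parameter choices are illustrative assumptions.

```python
import math
import random
import statistics

def estimate_quality(true_q, num_auctions=50, tasks=200, gamma=0.5, seed=1):
    """Toy version of the Section 3.4.1 experiment. Each user reports
    noisy readings with standard deviation q_n; the platform turns the
    mean absolute error into a per-auction estimate of q_n and smooths
    it with weight gamma (a stand-in for the unreproduced update (3.4))."""
    rng = random.Random(seed)
    est = {n: 0.5 for n in true_q}  # everyone starts at 0.5, as in the text
    for _ in range(num_auctions):
        for n, q in true_q.items():
            # E|N(0, q)| = q * sqrt(2/pi), so rescale the sample mean
            # of absolute errors to estimate q itself.
            mad = statistics.mean(abs(rng.gauss(0.0, q)) for _ in range(tasks))
            sample = mad * math.sqrt(math.pi / 2.0)
            est[n] = gamma * est[n] + (1.0 - gamma) * sample
    return est

est = estimate_quality({"u1": 0.2, "u2": 0.8})
```

As in Fig. 3.3, the estimates drift from the common initial value 0.5 toward the true quality indicators as more auctions are conducted.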
The sensing data $\hat{\delta}_{k,n}^{(l)}$ provided by user $n$ is generated from a Gaussian distribution $N(\delta_k^{(l)}, q_n)$, where the quality indicator $q_n$ of each user is uniformly distributed over $[0, 1]$. After collecting the sensing data for task $\tau_k$, the platform calculates the estimated value $\hat{\delta}_k^{(l)}$ and updates the estimated quality indicator $\bar{q}_n^{(l)}$. At the beginning, the platform assumes that all users have the same estimated quality indicator of 0.5. The weight $\gamma$ in (3.4) is set to 0.5.

We use $V(\mathcal{S})$ to denote the valuation that the platform can obtain when the quality indicators $q_n$ of the users are known to the platform. Similarly, $\bar{V}(\mathcal{S}^{(l)})$ denotes the valuation that the platform obtains according to its historical record of the estimated quality indicators $\bar{q}_n^{(l)}$. The estimation error of the quality of sensing can then be calculated as $\frac{|V(\mathcal{S})-\bar{V}(\mathcal{S}^{(l)})|}{V(\mathcal{S})}$.

Figure 3.3: Estimation error versus the number of auctions $l$ ($M = 500$, $N = 1000$, $G = 5000$). A smaller estimation error corresponds to higher accuracy.

Fig. 3.3 shows the estimation error of the quality of sensing. The results show that as the platform conducts more auctions, it obtains more accurate estimates of the quality indicators and selects the users to perform the tasks more appropriately.

3.4.2 ABSee

Figs. 3.4, 3.5, and 3.6 show the valuation function $V(\mathcal{S})$ obtained from both mechanisms. We can see that ABSee significantly improves upon the valuation obtained from GREEDY-SM. In Fig. 3.4, the value of $V(\mathcal{S})$ increases when there are more tasks. According to (3.6), an increase in the number of tasks can provide a higher valuation to the platform. The platform can also
obtain a higher valuation when the number of users increases, as shown in Fig. 3.5. In this case, the number of users who can perform the tasks becomes larger. Therefore, those bids with lower reserve prices but higher quality of sensing can be chosen. From Fig. 3.6, the more budget the platform has, the higher the valuation it can obtain. This is because it can select more winning bids to have the tasks performed.

Figure 3.4: Valuation function $V(\mathcal{S})$ versus the number of tasks $M$ ($N = 500$, $G = 500$).

Figure 3.5: Valuation function $V(\mathcal{S})$ versus the number of users $N$ ($M = 100$, $G = 500$).

Figure 3.6: Valuation function $V(\mathcal{S})$ versus budget $G$ ($M = 100$, $N = 500$).

Fig. 3.7 shows the crowd factor $\theta$ versus the number of users. When there are more users or a larger budget, the number of winning bids increases and thus the crowd factor becomes larger, which captures a practical mobile crowdsourcing system. Fig. 3.8 further shows the cumulative distribution function (CDF) of $\theta$. Over 100 experiments, we can see that $\theta$ is close to 1 with high probability.

Figure 3.7: Crowd factor $\theta$ versus the number of users $N$ ($M = 500$).

Figure 3.8: CDF of $\theta$ ($M = 500$, $N = 1000$).

In Fig. 3.9, we verify the approximation ratio of ABSee. Since problem (3.21) is NP-hard, it is time consuming to obtain the optimal value, so we cannot compare the optimal value and the value obtained from ABSee directly. We circumvent this issue by comparing $\tilde{V}(\mathcal{S})$ and $V(\mathcal{S})$.
Recall that $\tilde{V}(\mathcal{S})$ is the valuation obtained from the fractional greedy algorithm. From Section 3.3, we know that $\mathrm{opt}(\mathcal{B}) \le \frac{e}{e-1}\tilde{V}(\mathcal{S}) < \frac{2e}{\theta(e-1)}V(\mathcal{S})$. Thus, we can validate the approximation ratio by showing that $\theta\frac{\tilde{V}(\mathcal{S})}{V(\mathcal{S})} < 2$.

Figure 3.9: $\theta\frac{\tilde{V}(\mathcal{S})}{V(\mathcal{S})}$ versus the number of users $N$ ($M = 100$, $G = 100$). We have $\theta\frac{\tilde{V}(\mathcal{S})}{V(\mathcal{S})} \le 2$, which validates the approximation ratio.

Fig. 3.9 confirms that $\theta\frac{\tilde{V}(\mathcal{S})}{V(\mathcal{S})}$ is always smaller than 2. Note that $\theta$ is close to 1 when the number of users and the budget are large. Thus, the approximation ratio $\frac{\mathrm{opt}(\mathcal{B})}{V(\mathcal{S})}$ is always less than $\frac{2e}{\theta(e-1)}$, which approaches $\frac{2e}{e-1}$.

In Fig. 3.10, we compare the running times of ABSee and GREEDY-SM. ABSee is slightly slower than GREEDY-SM, since it needs to calculate the crowd factor $\theta$ iteratively. However, it significantly improves the valuation obtained by the platform.

Figure 3.10: Running time versus the number of tasks $M$ ($N = 500$, $G = 100$).

Chapter 4

Conclusion and Future Work

In this chapter, we conclude the thesis by summarizing the research work. We also suggest some possible extensions for future work.

4.1 Conclusion

In this thesis, we considered the quality of sensing of the smartphone users in a mobile crowdsourcing system. The platform aims to maximize the valuation of the performed tasks with a limited budget. The platform estimates the quality of sensing of the users and keeps a historical record. We designed an auction-based budget feasible mechanism called ABSee, which selects the winning bids and determines the payments to the users. In addition to budget feasibility, we proved that ABSee satisfies computational efficiency, truthfulness, and individual rationality.
We also proved that ABSee achieves an approximation ratio of $\frac{2e}{\theta(e-1)}$ in the mobile crowdsourcing system. Simulation results showed that the quality of sensing of the users can be estimated accurately, and that the platform obtains a higher valuation with ABSee than with GREEDY-SM [19].

4.2 Future Work

In terms of future work, possible extensions are as follows:

1. In this thesis, we consider a platform that collects sensing data from many smartphone users with strategic behaviors. In this setting, there is only one mobile crowdsourcing service consumer, i.e., the platform, and multiple service providers, i.e., the smartphone users. In practice, there may be many service consumers, all of which aim to maximize their valuations. Developing a double auction mechanism would be one approach to this problem.

2. In this thesis, we propose a novel budget feasible mechanism for a platform with a budget constraint. We can further explore mechanism design when the platform aims to minimize the total payment to the users, i.e., considering frugality in auctions.

3. In this thesis, smartphone users are assumed to submit random subsets of tasks within their bids. This simplifies the model so that we can focus on the mechanism design and on improving the valuation of the platform. From the perspective of the users, however, they could develop strategies and find specific combinations of tasks that maximize their utilities. This would be another interesting topic to explore.

Bibliography

[1] R. Ganti, F. Ye, and H. Lei, "Mobile crowdsensing: Current state and future challenges," IEEE Communications Magazine, vol. 49, no. 11, pp. 32–39, Nov. 2011.

[2] SmartCitizen, "Smartcitizen," 2015, online; accessed 10-July-2015. [Online]. Available: https://smartcitizen.me/.

[3] buUuk Pte Ltd, "Weatherlah," 2015, online; accessed 10-July-2015. [Online].
Available: http://www.weatherlah.com/.

[4] Technische Universität Darmstadt, "Noisemap," 2015, online; accessed 10-July-2015. [Online]. Available: https://play.google.com/store/apps/details?id=de.tudarmstadt.tk.noisemap.

[5] OpenSignal, "Opensignal," 2015, online; accessed 10-July-2015. [Online]. Available: http://opensignal.com/.

[6] Sensorly, "Sensorly," 2015, online; accessed 10-July-2015. [Online]. Available: http://www.sensorly.com.

[7] Waze Mobile, "Waze," 2015, online; accessed 10-July-2015. [Online]. Available: https://www.waze.com.

[8] D. Yang, G. Xue, X. Fang, and J. Tang, "Crowdsourcing to smartphones: Incentive mechanism design for mobile phone sensing," in Proc. of ACM MobiCom, Istanbul, Turkey, Aug. 2012.

[9] Z. Feng, Y. Zhu, Q. Zhang, L. Ni, and A. Vasilakos, "TRAC: Truthful auction for location-aware collaborative sensing in mobile crowdsourcing," in Proc. of IEEE INFOCOM, Toronto, Canada, Apr. 2014.

[10] X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, "Truthful incentive mechanisms for crowdsourcing," in Proc. of IEEE INFOCOM, Hong Kong, Apr. 2015.

[11] T. Luo, H.-P. Tan, and L. Xia, "Profit-maximizing incentive for participatory sensing," in Proc. of IEEE INFOCOM, Toronto, Canada, Apr. 2014.

[12] L. Gao, F. Hou, and J. Huang, "Providing long-term participation incentive in participatory sensing," in Proc. of IEEE INFOCOM, Hong Kong, Apr. 2015.

[13] D. Zhao, X.-Y. Li, and H. Ma, "How to crowdsource tasks truthfully without sacrificing utility: Online incentive mechanisms with budget constraint," in Proc. of IEEE INFOCOM, Toronto, Canada, Apr. 2014.

[14] L. Tran-Thanh, M. Venanzi, A. Rogers, and N. R. Jennings, "Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks," in Proc. of ACM Int'l Conf. on Autonomous Agents and Multiagent Systems (AAMAS), Saint Paul, MN, May 2013.

[15] K. Han, C. Zhang, and J. Luo, "BLISS: Budget limited robust crowdsensing through online learning," in Proc.
of IEEE SECON, Singapore, Jun. 2014.[16] D. Wang, L. Kaplan, H. Le, and T. Abdelzaher, “On truth discovery in social sensing: Amaximum likelihood estimation approach,” in Proc. of ACM IPSN, Beijing, China, Apr.2012.[17] Q. Li, Y. Li, J. Gao, B. Zhao, W. Fan, and J. Han, “Resolving conflicts in heterogeneousdata by truth discovery and source reliability estimation,” in Proc. of ACM SIGMOD,Snowbird, UT, Jun. 2014.[18] Y. Singer, “Budget feasible mechanisms,” in Proc. of IEEE Symposium on Foundationsof Computer Science (FOCS), Las Vegas, NV, Oct. 2010.[19] N. Chen, N. Gravin, and P. Lu, “On the approximability of budget feasible mechanisms,”in Proc. of ACM-SIAM Symposium on Discrete Algorithms (SODA), San Francisco, CA,Jan. 2011.[20] E. Davami and G. Sukthankar, “Improving the performance of mobile phone crowdsourc-ing applications,” in Proc. of ACM Int’l Conf. on Autonomous Agents and MultiagentSystems (AAMAS), Istanbul, Turkey, May 2015.[21] I. Koutsopoulos, “Optimal incentive-driven design of participatory sensing systems,” inProc. of IEEE INFOCOM, Turin, Italy, Apr. 2013.Bibliography 50[22] H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, “Quality of information aware incentivemechanisms for mobile crowd sensing systems,” in Proc. of ACM MobiHoc, Hangzhou,China, Jun. 2015.[23] Q. Zhang, Y. Wen, X. Tian, X. Gan, and X. Wang, “Incentivize crowd labeling underbudget constraint,” in Proc. of IEEE INFOCOM, Hong Kong, Apr. 2015.[24] D. Peng, F. Wu, and G. Chen, “Pay as how well you do: A quality based incentive mech-anism for crowdsensing,” in Proc. of ACM MobiHoc, Hangzhou, China, Jun. 2015.[25] Y. Singer, “How to win friends and influence people, truthfully: Influence maximizationmechanisms for social networks,” in Proc. of ACM WSDM, Seattle, WA, Feb. 2012.[26] I. Koutsopoulos and M. Halkidi, “Lifetime maximization in wireless sensor networks withan estimation mission,” in Proc. of IEEE GLOBECOM, Honolulu, HI, Nov. 2009.[27] D. M. 
Topkis, Supermodularity and Complementarity. Princeton University Press, 1998.[28] R. B. Myerson, “Optimal auction design,” Mathematics of Operations Research, vol. 6,no. 1, pp. 58–73, Feb. 1981.[29] A. Krause and C. Guestrin, “A note on the budgeted maximization of submodular func-tions,” CMU Technical Report, pp. CMU–CALD–05–103, 2005.
Item Metadata

| Title | A truthful incentive mechanism for mobile crowdsourcing |
| Creator | Song, Boya |
| Publisher | University of British Columbia |
| Date Issued | 2015 |
| Genre | Thesis/Dissertation |
| Type | Text |
| Language | eng |
| Date Available | 2015-08-11 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivs 2.5 Canada |
| DOI | 10.14288/1.0166520 |
| URI | http://hdl.handle.net/2429/54355 |
| Degree | Master of Applied Science - MASc |
| Program | Electrical and Computer Engineering |
| Affiliation | Applied Science, Faculty of; Electrical and Computer Engineering, Department of |
| Degree Grantor | University of British Columbia |
| Graduation Date | 2015-09 |
| Campus | UBCV |
| Scholarly Level | Graduate |
| Rights URI | http://creativecommons.org/licenses/by-nc-nd/2.5/ca/ |
| Aggregated Source Repository | DSpace |